Jan 30 14:03:24.126735 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 14:03:24.126786 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:03:24.126803 kernel: BIOS-provided physical RAM map:
Jan 30 14:03:24.126816 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 30 14:03:24.126876 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 30 14:03:24.126891 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 30 14:03:24.126908 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 30 14:03:24.126929 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 30 14:03:24.126945 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 30 14:03:24.126960 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 30 14:03:24.126976 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 30 14:03:24.126991 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 30 14:03:24.127006 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 30 14:03:24.127022 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 30 14:03:24.127045 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 30 14:03:24.127062 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 30 14:03:24.127079 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 30 14:03:24.127096 kernel: NX (Execute Disable) protection: active
Jan 30 14:03:24.127112 kernel: APIC: Static calls initialized
Jan 30 14:03:24.127128 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:03:24.127145 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 30 14:03:24.127161 kernel: SMBIOS 2.4 present.
Jan 30 14:03:24.127177 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 30 14:03:24.127193 kernel: Hypervisor detected: KVM
Jan 30 14:03:24.127212 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 14:03:24.127228 kernel: kvm-clock: using sched offset of 12700318126 cycles
Jan 30 14:03:24.127245 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 14:03:24.127261 kernel: tsc: Detected 2299.998 MHz processor
Jan 30 14:03:24.127278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:03:24.127295 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:03:24.127310 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 30 14:03:24.127326 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 30 14:03:24.127341 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:03:24.127361 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 30 14:03:24.127376 kernel: Using GB pages for direct mapping
Jan 30 14:03:24.127392 kernel: Secure boot disabled
Jan 30 14:03:24.127408 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:03:24.127423 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 30 14:03:24.127440 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 30 14:03:24.127455 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 30 14:03:24.127479 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 30 14:03:24.127499 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 30 14:03:24.127517 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 30 14:03:24.127535 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 30 14:03:24.127553 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 30 14:03:24.127570 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 30 14:03:24.127588 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 30 14:03:24.127608 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 30 14:03:24.127626 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 30 14:03:24.127643 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 30 14:03:24.127660 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 30 14:03:24.127678 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 30 14:03:24.127703 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 30 14:03:24.127721 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 30 14:03:24.127738 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 30 14:03:24.127756 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 30 14:03:24.127778 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 30 14:03:24.127796 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 14:03:24.127814 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 14:03:24.127855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 14:03:24.127873 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 30 14:03:24.127892 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 30 14:03:24.127910 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 30 14:03:24.127928 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 30 14:03:24.127947 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 30 14:03:24.127968 kernel: Zone ranges:
Jan 30 14:03:24.127986 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:03:24.128004 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 14:03:24.128022 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 14:03:24.128040 kernel: Movable zone start for each node
Jan 30 14:03:24.128058 kernel: Early memory node ranges
Jan 30 14:03:24.128076 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 30 14:03:24.128094 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 30 14:03:24.128112 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 30 14:03:24.128135 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 30 14:03:24.128153 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 14:03:24.128171 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 30 14:03:24.128189 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:03:24.128207 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 30 14:03:24.128225 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 30 14:03:24.128244 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 30 14:03:24.128262 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 30 14:03:24.128280 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 14:03:24.128302 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 14:03:24.128320 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 14:03:24.128338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 14:03:24.128356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:03:24.128373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 14:03:24.128392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 14:03:24.128410 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:03:24.128428 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 14:03:24.128446 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 14:03:24.128469 kernel: Booting paravirtualized kernel on KVM
Jan 30 14:03:24.128487 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:03:24.128506 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 14:03:24.128524 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 14:03:24.128542 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 14:03:24.128560 kernel: pcpu-alloc: [0] 0 1
Jan 30 14:03:24.128578 kernel: kvm-guest: PV spinlocks enabled
Jan 30 14:03:24.128596 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 14:03:24.128616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:03:24.128639 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:03:24.128657 kernel: random: crng init done
Jan 30 14:03:24.128673 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 14:03:24.128691 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:03:24.128717 kernel: Fallback order for Node 0: 0
Jan 30 14:03:24.128735 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 30 14:03:24.128753 kernel: Policy zone: Normal
Jan 30 14:03:24.128771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:03:24.128794 kernel: software IO TLB: area num 2.
Jan 30 14:03:24.128811 kernel: Memory: 7513372K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346952K reserved, 0K cma-reserved)
Jan 30 14:03:24.128843 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:03:24.128860 kernel: Kernel/User page tables isolation: enabled
Jan 30 14:03:24.128879 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 14:03:24.128896 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:03:24.128913 kernel: Dynamic Preempt: voluntary
Jan 30 14:03:24.128931 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:03:24.128952 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:03:24.128989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:03:24.129009 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:03:24.129028 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:03:24.129051 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:03:24.129070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:03:24.129090 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:03:24.129109 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 14:03:24.129128 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:03:24.129148 kernel: Console: colour dummy device 80x25
Jan 30 14:03:24.129171 kernel: printk: console [ttyS0] enabled
Jan 30 14:03:24.129191 kernel: ACPI: Core revision 20230628
Jan 30 14:03:24.129211 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:03:24.129229 kernel: x2apic enabled
Jan 30 14:03:24.129249 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 14:03:24.129269 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 30 14:03:24.129288 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 14:03:24.129308 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 30 14:03:24.129332 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 30 14:03:24.129351 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 30 14:03:24.129371 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:03:24.129390 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 14:03:24.129410 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 14:03:24.129429 kernel: Spectre V2 : Mitigation: IBRS
Jan 30 14:03:24.129448 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:03:24.129468 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 14:03:24.129487 kernel: RETBleed: Mitigation: IBRS
Jan 30 14:03:24.129511 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:03:24.129530 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 30 14:03:24.129550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:03:24.129570 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 14:03:24.129589 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 14:03:24.129609 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:03:24.129627 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:03:24.129647 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:03:24.129667 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:03:24.129690 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 14:03:24.129716 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:03:24.129736 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:03:24.129755 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:03:24.129774 kernel: landlock: Up and running.
Jan 30 14:03:24.129793 kernel: SELinux: Initializing.
Jan 30 14:03:24.129813 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.129851 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.129866 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 30 14:03:24.129886 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:03:24.129902 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:03:24.129918 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:03:24.129934 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 30 14:03:24.129950 kernel: signal: max sigframe size: 1776
Jan 30 14:03:24.129968 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:03:24.129987 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:03:24.130004 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 14:03:24.130022 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:03:24.130045 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:03:24.130064 kernel: .... node #0, CPUs: #1
Jan 30 14:03:24.130083 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 14:03:24.130101 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 14:03:24.130118 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:03:24.130136 kernel: smpboot: Max logical packages: 1
Jan 30 14:03:24.130154 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 30 14:03:24.130171 kernel: devtmpfs: initialized
Jan 30 14:03:24.130193 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:03:24.130211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 30 14:03:24.130229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:03:24.130245 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:03:24.130264 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:03:24.130282 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:03:24.130300 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:03:24.130319 kernel: audit: type=2000 audit(1738245802.743:1): state=initialized audit_enabled=0 res=1
Jan 30 14:03:24.130337 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:03:24.130359 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:03:24.130378 kernel: cpuidle: using governor menu
Jan 30 14:03:24.130394 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:03:24.130413 kernel: dca service started, version 1.12.1
Jan 30 14:03:24.130431 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:03:24.130449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:03:24.130467 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:03:24.130485 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:03:24.130504 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:03:24.130526 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:03:24.130545 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:03:24.130564 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:03:24.130582 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:03:24.130601 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:03:24.130620 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 14:03:24.130638 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 14:03:24.130657 kernel: ACPI: Interpreter enabled
Jan 30 14:03:24.130673 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 14:03:24.130703 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:03:24.130721 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:03:24.130740 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 30 14:03:24.130758 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 14:03:24.130777 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:03:24.131093 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:03:24.131296 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 14:03:24.131496 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 14:03:24.131520 kernel: PCI host bridge to bus 0000:00
Jan 30 14:03:24.131724 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:03:24.131913 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:03:24.132081 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:03:24.132244 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 30 14:03:24.132409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:03:24.132614 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 14:03:24.132848 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 30 14:03:24.133046 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 14:03:24.133236 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 14:03:24.133480 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 30 14:03:24.133668 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 30 14:03:24.133882 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 30 14:03:24.134074 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:03:24.134257 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 30 14:03:24.134436 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 30 14:03:24.134623 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 14:03:24.134870 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 30 14:03:24.135070 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 30 14:03:24.135102 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 14:03:24.135123 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 14:03:24.135142 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 14:03:24.135162 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 14:03:24.135182 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 14:03:24.135201 kernel: iommu: Default domain type: Translated
Jan 30 14:03:24.135221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:03:24.135241 kernel: efivars: Registered efivars operations
Jan 30 14:03:24.135260 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:03:24.135284 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:03:24.135304 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 30 14:03:24.135324 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 30 14:03:24.135344 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 30 14:03:24.135362 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 30 14:03:24.135382 kernel: vgaarb: loaded
Jan 30 14:03:24.135402 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 14:03:24.135422 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:03:24.135442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:03:24.135465 kernel: pnp: PnP ACPI init
Jan 30 14:03:24.135484 kernel: pnp: PnP ACPI: found 7 devices
Jan 30 14:03:24.135504 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:03:24.135525 kernel: NET: Registered PF_INET protocol family
Jan 30 14:03:24.135545 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 14:03:24.135565 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 14:03:24.135585 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:03:24.135603 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:03:24.135623 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 14:03:24.135647 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 14:03:24.135667 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.135687 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.135724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:03:24.135744 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:03:24.135985 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:03:24.136153 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:03:24.136320 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:03:24.136492 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 30 14:03:24.136683 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 14:03:24.136718 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:03:24.136739 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 14:03:24.136759 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 30 14:03:24.136779 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 14:03:24.136800 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 14:03:24.136845 kernel: clocksource: Switched to clocksource tsc
Jan 30 14:03:24.136869 kernel: Initialise system trusted keyrings
Jan 30 14:03:24.136885 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 14:03:24.136903 kernel: Key type asymmetric registered
Jan 30 14:03:24.136919 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:03:24.136937 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:03:24.136954 kernel: io scheduler mq-deadline registered
Jan 30 14:03:24.136972 kernel: io scheduler kyber registered
Jan 30 14:03:24.136990 kernel: io scheduler bfq registered
Jan 30 14:03:24.137008 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:03:24.137031 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 14:03:24.137234 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 30 14:03:24.137258 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 30 14:03:24.137445 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 30 14:03:24.137468 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 14:03:24.137655 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 30 14:03:24.137681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:03:24.137710 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:03:24.137730 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 14:03:24.137755 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 30 14:03:24.137776 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 30 14:03:24.138044 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 30 14:03:24.138075 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 14:03:24.138095 kernel: i8042: Warning: Keylock active
Jan 30 14:03:24.138114 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 14:03:24.138134 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 14:03:24.138315 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 14:03:24.138497 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 14:03:24.138668 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T14:03:23 UTC (1738245803)
Jan 30 14:03:24.138892 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 14:03:24.138915 kernel: intel_pstate: CPU model not supported
Jan 30 14:03:24.138935 kernel: pstore: Using crash dump compression: deflate
Jan 30 14:03:24.138953 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 14:03:24.138971 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:03:24.138990 kernel: Segment Routing with IPv6
Jan 30 14:03:24.139025 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:03:24.139045 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:03:24.139065 kernel: Key type dns_resolver registered
Jan 30 14:03:24.139083 kernel: IPI shorthand broadcast: enabled
Jan 30 14:03:24.139102 kernel: sched_clock: Marking stable (880004826, 165202055)->(1157908919, -112702038)
Jan 30 14:03:24.139119 kernel: registered taskstats version 1
Jan 30 14:03:24.139137 kernel: Loading compiled-in X.509 certificates
Jan 30 14:03:24.139157 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 14:03:24.139175 kernel: Key type .fscrypt registered
Jan 30 14:03:24.139198 kernel: Key type fscrypt-provisioning registered
Jan 30 14:03:24.139218 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:03:24.139238 kernel: ima: No architecture policies found
Jan 30 14:03:24.139256 kernel: clk: Disabling unused clocks
Jan 30 14:03:24.139273 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 14:03:24.139290 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 14:03:24.139310 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 14:03:24.139328 kernel: Run /init as init process
Jan 30 14:03:24.139351 kernel: with arguments:
Jan 30 14:03:24.139370 kernel: /init
Jan 30 14:03:24.139388 kernel: with environment:
Jan 30 14:03:24.139406 kernel: HOME=/
Jan 30 14:03:24.139423 kernel: TERM=linux
Jan 30 14:03:24.139442 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:03:24.139460 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 14:03:24.139485 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:03:24.139514 systemd[1]: Detected virtualization google.
Jan 30 14:03:24.139535 systemd[1]: Detected architecture x86-64.
Jan 30 14:03:24.139552 systemd[1]: Running in initrd.
Jan 30 14:03:24.139571 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:03:24.139590 systemd[1]: Hostname set to <localhost>.
Jan 30 14:03:24.139610 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:03:24.139629 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:03:24.139649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:03:24.139674 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:03:24.139702 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:03:24.139724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:03:24.139745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:03:24.139763 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:03:24.139786 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:03:24.139810 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:03:24.139853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:03:24.139873 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:03:24.139924 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:03:24.139944 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:03:24.139963 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:03:24.139981 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:03:24.140005 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:03:24.140022 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:03:24.140046 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:03:24.140071 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:03:24.140095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:03:24.140117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:03:24.140136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:03:24.140157 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:03:24.140182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:03:24.140203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:03:24.140225 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:03:24.140245 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:03:24.140266 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:03:24.140287 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:03:24.140308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:03:24.140372 systemd-journald[183]: Collecting audit messages is disabled.
Jan 30 14:03:24.140422 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:03:24.140443 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:03:24.140463 systemd-journald[183]: Journal started
Jan 30 14:03:24.140508 systemd-journald[183]: Runtime Journal (/run/log/journal/adee386b4e8a495ca0d8f2d7e983561a) is 8.0M, max 148.7M, 140.7M free.
Jan 30 14:03:24.145934 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:03:24.150957 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:03:24.152166 systemd-modules-load[184]: Inserted module 'overlay' Jan 30 14:03:24.170118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:03:24.173393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:03:24.185799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:03:24.198626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:03:24.204859 kernel: Bridge firewalling registered Jan 30 14:03:24.205993 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 30 14:03:24.213139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:03:24.216805 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:03:24.225424 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:03:24.230353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:03:24.242068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:03:24.244080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:03:24.273146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:03:24.273802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:03:24.280047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:03:24.291139 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 14:03:24.300106 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:03:24.329795 systemd-resolved[216]: Positive Trust Anchors: Jan 30 14:03:24.330417 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:03:24.330648 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:03:24.351121 dracut-cmdline[218]: dracut-dracut-053 Jan 30 14:03:24.351121 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:03:24.337348 systemd-resolved[216]: Defaulting to hostname 'linux'. Jan 30 14:03:24.339800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:03:24.377097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:03:24.444875 kernel: SCSI subsystem initialized Jan 30 14:03:24.455877 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 14:03:24.467875 kernel: iscsi: registered transport (tcp) Jan 30 14:03:24.492348 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:03:24.492446 kernel: QLogic iSCSI HBA Driver Jan 30 14:03:24.545620 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:03:24.560054 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:03:24.590333 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:03:24.590425 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:03:24.592575 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:03:24.637881 kernel: raid6: avx2x4 gen() 17758 MB/s Jan 30 14:03:24.654875 kernel: raid6: avx2x2 gen() 17871 MB/s Jan 30 14:03:24.672416 kernel: raid6: avx2x1 gen() 13807 MB/s Jan 30 14:03:24.672487 kernel: raid6: using algorithm avx2x2 gen() 17871 MB/s Jan 30 14:03:24.690306 kernel: raid6: .... xor() 17437 MB/s, rmw enabled Jan 30 14:03:24.690376 kernel: raid6: using avx2x2 recovery algorithm Jan 30 14:03:24.713860 kernel: xor: automatically using best checksumming function avx Jan 30 14:03:24.895855 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:03:24.909156 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:03:24.916078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:03:24.947673 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 30 14:03:24.954695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:03:24.965186 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:03:24.995220 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 30 14:03:25.033511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 14:03:25.044120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:03:25.136656 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:03:25.150145 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:03:25.189908 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:03:25.204524 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:03:25.232979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:03:25.254995 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:03:25.293289 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:03:25.293349 kernel: scsi host0: Virtio SCSI HBA Jan 30 14:03:25.292147 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:03:25.346009 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 30 14:03:25.339897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:03:25.340125 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:03:25.443979 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 14:03:25.444029 kernel: AES CTR mode by8 optimization enabled Jan 30 14:03:25.444055 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 30 14:03:25.489981 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 30 14:03:25.490265 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 30 14:03:25.490501 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 30 14:03:25.490722 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:03:25.490974 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 30 14:03:25.490999 kernel: GPT:17805311 != 25165823 Jan 30 14:03:25.491020 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:03:25.491041 kernel: GPT:17805311 != 25165823 Jan 30 14:03:25.491062 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:03:25.491084 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.491116 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 30 14:03:25.365723 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:03:25.375361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:03:25.375623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:03:25.396038 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:03:25.441317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:03:25.494143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:03:25.569508 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (457) Jan 30 14:03:25.575484 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 30 14:03:25.596029 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (454) Jan 30 14:03:25.607335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:03:25.620567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 30 14:03:25.632814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 30 14:03:25.646205 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 30 14:03:25.673758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Jan 30 14:03:25.699072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:03:25.732077 disk-uuid[538]: Primary Header is updated. Jan 30 14:03:25.732077 disk-uuid[538]: Secondary Entries is updated. Jan 30 14:03:25.732077 disk-uuid[538]: Secondary Header is updated. Jan 30 14:03:25.740184 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:03:25.771088 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.785891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.808849 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.817633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:03:26.805851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:26.807926 disk-uuid[539]: The operation has completed successfully. Jan 30 14:03:26.885447 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:03:26.885595 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:03:26.923106 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:03:26.943379 sh[565]: Success Jan 30 14:03:26.955969 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 14:03:27.052806 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:03:27.059998 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:03:27.090487 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 14:03:27.129933 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 14:03:27.130028 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:03:27.130056 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:03:27.139392 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:03:27.146253 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:03:27.184879 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:03:27.192064 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:03:27.193077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:03:27.203097 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:03:27.271247 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:27.271291 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:03:27.271315 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:03:27.271336 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:03:27.271357 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:03:27.271349 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:03:27.302065 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:27.283612 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:03:27.300734 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:03:27.338109 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 14:03:27.446659 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:03:27.457325 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:03:27.545477 systemd-networkd[748]: lo: Link UP Jan 30 14:03:27.545490 systemd-networkd[748]: lo: Gained carrier Jan 30 14:03:27.546504 ignition[655]: Ignition 2.19.0 Jan 30 14:03:27.548646 systemd-networkd[748]: Enumeration completed Jan 30 14:03:27.546517 ignition[655]: Stage: fetch-offline Jan 30 14:03:27.549562 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:03:27.546590 ignition[655]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.549570 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:03:27.546610 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.550660 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:03:27.546795 ignition[655]: parsed url from cmdline: "" Jan 30 14:03:27.551674 systemd-networkd[748]: eth0: Link UP Jan 30 14:03:27.546804 ignition[655]: no config URL provided Jan 30 14:03:27.551680 systemd-networkd[748]: eth0: Gained carrier Jan 30 14:03:27.546815 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:03:27.551691 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:03:27.546885 ignition[655]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:03:27.567951 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.55/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 14:03:27.547665 ignition[655]: failed to fetch config: resource requires networking Jan 30 14:03:27.576289 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 14:03:27.548039 ignition[655]: Ignition finished successfully Jan 30 14:03:27.594803 systemd[1]: Reached target network.target - Network. Jan 30 14:03:27.656594 ignition[757]: Ignition 2.19.0 Jan 30 14:03:27.615068 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 14:03:27.656605 ignition[757]: Stage: fetch Jan 30 14:03:27.668224 unknown[757]: fetched base config from "system" Jan 30 14:03:27.656809 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.668236 unknown[757]: fetched base config from "system" Jan 30 14:03:27.656836 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.668246 unknown[757]: fetched user config from "gcp" Jan 30 14:03:27.656980 ignition[757]: parsed url from cmdline: "" Jan 30 14:03:27.670812 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:03:27.656988 ignition[757]: no config URL provided Jan 30 14:03:27.688199 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:03:27.656997 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:03:27.735854 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:03:27.657013 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:03:27.755059 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:03:27.657042 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 30 14:03:27.787343 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:03:27.661703 ignition[757]: GET result: OK Jan 30 14:03:27.807748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 30 14:03:27.661807 ignition[757]: parsing config with SHA512: 865f2328c43cc8f208358fe9f7a6efecb7b9290e98e29d410e7d6711daa19e5d648821b43d6be21db53b778acff0c0e0cca52fbbeedc7d8468a46052acdab04e Jan 30 14:03:27.825214 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:03:27.668947 ignition[757]: fetch: fetch complete Jan 30 14:03:27.845114 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:03:27.668955 ignition[757]: fetch: fetch passed Jan 30 14:03:27.851192 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:03:27.669013 ignition[757]: Ignition finished successfully Jan 30 14:03:27.868226 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:03:27.733287 ignition[763]: Ignition 2.19.0 Jan 30 14:03:27.889298 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:03:27.733296 ignition[763]: Stage: kargs Jan 30 14:03:27.733507 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.733519 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.734632 ignition[763]: kargs: kargs passed Jan 30 14:03:27.734691 ignition[763]: Ignition finished successfully Jan 30 14:03:27.775144 ignition[768]: Ignition 2.19.0 Jan 30 14:03:27.775153 ignition[768]: Stage: disks Jan 30 14:03:27.775362 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.775374 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.776368 ignition[768]: disks: disks passed Jan 30 14:03:27.776437 ignition[768]: Ignition finished successfully Jan 30 14:03:27.954938 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 14:03:28.139014 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:03:28.172032 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 30 14:03:28.291874 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 14:03:28.292387 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:03:28.293308 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:03:28.325976 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:03:28.348969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:03:28.357569 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 14:03:28.357658 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:03:28.357704 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:03:28.368869 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (785) Jan 30 14:03:28.389789 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:28.389909 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:03:28.389938 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:03:28.416991 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:03:28.417078 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:03:28.460219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:03:28.469217 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:03:28.493154 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 14:03:28.627452 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:03:28.639015 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:03:28.648966 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:03:28.659987 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:03:28.821966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:03:28.826986 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:03:28.847106 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:03:28.877640 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:03:28.895114 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:28.928911 ignition[897]: INFO : Ignition 2.19.0 Jan 30 14:03:28.928911 ignition[897]: INFO : Stage: mount Jan 30 14:03:28.952029 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:28.952029 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:28.952029 ignition[897]: INFO : mount: mount passed Jan 30 14:03:28.952029 ignition[897]: INFO : Ignition finished successfully Jan 30 14:03:28.933284 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:03:28.941155 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:03:28.976396 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:03:28.997188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 14:03:29.081031 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (910) Jan 30 14:03:29.081081 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:29.081098 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:03:29.081112 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:03:29.096558 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:03:29.096676 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:03:29.101029 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:03:29.102375 systemd-networkd[748]: eth0: Gained IPv6LL Jan 30 14:03:29.141324 ignition[927]: INFO : Ignition 2.19.0 Jan 30 14:03:29.141324 ignition[927]: INFO : Stage: files Jan 30 14:03:29.158015 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:29.158015 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:29.158015 ignition[927]: DEBUG : files: compiled without relabeling support, skipping Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 14:03:29.158015 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 14:03:29.158015 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:03:29.158015 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 14:03:29.154017 unknown[927]: wrote ssh authorized keys file for user: core Jan 30 14:03:31.376266 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 14:03:31.621007 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 14:03:31.638020 ignition[927]: INFO : 
files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 14:03:31.935899 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 14:03:32.453218 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 14:03:32.453218 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 14:03:32.492050 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file 
"/sysroot/etc/.ignition-result.json" Jan 30 14:03:32.492050 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 14:03:32.492050 ignition[927]: INFO : files: files passed Jan 30 14:03:32.492050 ignition[927]: INFO : Ignition finished successfully Jan 30 14:03:32.458241 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 14:03:32.478113 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 14:03:32.508677 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 14:03:32.520606 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:03:32.708051 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:03:32.708051 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:03:32.520751 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:03:32.746156 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:03:32.606424 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:03:32.632478 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:03:32.654078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:03:32.742917 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:03:32.743057 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:03:32.757282 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:03:32.782073 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jan 30 14:03:32.802157 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:03:32.809071 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:03:32.875619 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:03:32.904146 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:03:32.924887 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:03:32.939352 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:03:32.961340 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:03:32.980332 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:03:32.980541 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:03:33.008390 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:03:33.035311 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:03:33.045413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:03:33.060366 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:03:33.078388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:03:33.098420 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:03:33.116445 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:03:33.133403 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:03:33.154427 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:03:33.171395 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:03:33.188434 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 30 14:03:33.188672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:03:33.219416 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:03:33.229436 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:03:33.248358 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:03:33.248575 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:03:33.268346 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:03:33.268542 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:03:33.307337 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:03:33.307573 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:03:33.316451 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:03:33.316634 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:03:33.385050 ignition[979]: INFO : Ignition 2.19.0
Jan 30 14:03:33.385050 ignition[979]: INFO : Stage: umount
Jan 30 14:03:33.385050 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:03:33.385050 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 14:03:33.343255 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:03:33.422260 ignition[979]: INFO : umount: umount passed
Jan 30 14:03:33.422260 ignition[979]: INFO : Ignition finished successfully
Jan 30 14:03:33.406150 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:03:33.407189 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:03:33.407403 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:03:33.476363 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:03:33.476543 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:03:33.511065 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:03:33.512225 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:03:33.512343 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:03:33.534903 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:03:33.535050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:03:33.545411 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:03:33.545556 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:03:33.560569 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:03:33.560634 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:03:33.587218 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:03:33.587300 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:03:33.612245 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:03:33.612325 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:03:33.639232 systemd[1]: Stopped target network.target - Network.
Jan 30 14:03:33.648238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:03:33.648332 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:03:33.674197 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:03:33.682201 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:03:33.685947 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:03:33.698256 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:03:33.724163 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:03:33.733301 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:03:33.733363 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:03:33.749340 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:03:33.749402 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:03:33.784198 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:03:33.784289 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:03:33.793287 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:03:33.793371 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:03:33.810301 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:03:33.810390 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:03:33.848432 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:03:33.853951 systemd-networkd[748]: eth0: DHCPv6 lease lost
Jan 30 14:03:33.868250 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:03:33.886597 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:03:33.886742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:03:33.913815 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:03:33.914128 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:03:33.924162 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:03:33.924223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:03:33.963006 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:03:33.983995 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:03:33.984126 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:03:33.995242 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:03:33.995328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:03:34.005294 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:03:34.005371 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:03:34.033211 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:03:34.033305 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:03:34.054324 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:03:34.072663 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:03:34.072875 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:03:34.100313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:03:34.100467 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:03:34.111251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:03:34.111304 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:03:34.138180 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:03:34.138268 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:03:34.167331 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:03:34.167416 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:03:34.194310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:03:34.194413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:03:34.240207 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:03:34.515039 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:03:34.243186 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:03:34.243274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:03:34.292222 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 14:03:34.292323 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:03:34.314212 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:03:34.314291 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:03:34.336195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:03:34.336280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:03:34.345916 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:03:34.346059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:03:34.365655 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:03:34.365802 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:03:34.384440 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:03:34.417081 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:03:34.460366 systemd[1]: Switching root.
Jan 30 14:03:34.665994 systemd-journald[183]: Journal stopped
Jan 30 14:03:24.126735 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 14:03:24.126786 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:03:24.126803 kernel: BIOS-provided physical RAM map:
Jan 30 14:03:24.126816 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 30 14:03:24.126876 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 30 14:03:24.126891 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 30 14:03:24.126908 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 30 14:03:24.126929 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 30 14:03:24.126945 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 30 14:03:24.126960 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 30 14:03:24.126976 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 30 14:03:24.126991 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 30 14:03:24.127006 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 30 14:03:24.127022 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 30 14:03:24.127045 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 30 14:03:24.127062 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 30 14:03:24.127079 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 30 14:03:24.127096 kernel: NX (Execute Disable) protection: active
Jan 30 14:03:24.127112 kernel: APIC: Static calls initialized
Jan 30 14:03:24.127128 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:03:24.127145 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 30 14:03:24.127161 kernel: SMBIOS 2.4 present.
Jan 30 14:03:24.127177 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 30 14:03:24.127193 kernel: Hypervisor detected: KVM
Jan 30 14:03:24.127212 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 14:03:24.127228 kernel: kvm-clock: using sched offset of 12700318126 cycles
Jan 30 14:03:24.127245 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 14:03:24.127261 kernel: tsc: Detected 2299.998 MHz processor
Jan 30 14:03:24.127278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:03:24.127295 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:03:24.127310 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 30 14:03:24.127326 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 30 14:03:24.127341 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:03:24.127361 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 30 14:03:24.127376 kernel: Using GB pages for direct mapping
Jan 30 14:03:24.127392 kernel: Secure boot disabled
Jan 30 14:03:24.127408 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:03:24.127423 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 30 14:03:24.127440 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 30 14:03:24.127455 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 30 14:03:24.127479 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 30 14:03:24.127499 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 30 14:03:24.127517 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 30 14:03:24.127535 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 30 14:03:24.127553 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 30 14:03:24.127570 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 30 14:03:24.127588 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 30 14:03:24.127608 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 30 14:03:24.127626 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 30 14:03:24.127643 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 30 14:03:24.127660 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 30 14:03:24.127678 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 30 14:03:24.127703 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 30 14:03:24.127721 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 30 14:03:24.127738 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 30 14:03:24.127756 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 30 14:03:24.127778 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 30 14:03:24.127796 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 14:03:24.127814 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 14:03:24.127855 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 14:03:24.127873 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 30 14:03:24.127892 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 30 14:03:24.127910 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 30 14:03:24.127928 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 30 14:03:24.127947 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jan 30 14:03:24.127968 kernel: Zone ranges:
Jan 30 14:03:24.127986 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:03:24.128004 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 14:03:24.128022 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 14:03:24.128040 kernel: Movable zone start for each node
Jan 30 14:03:24.128058 kernel: Early memory node ranges
Jan 30 14:03:24.128076 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 30 14:03:24.128094 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 30 14:03:24.128112 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 30 14:03:24.128135 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 30 14:03:24.128153 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 30 14:03:24.128171 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 30 14:03:24.128189 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:03:24.128207 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 30 14:03:24.128225 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 30 14:03:24.128244 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 30 14:03:24.128262 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 30 14:03:24.128280 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 30 14:03:24.128302 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 14:03:24.128320 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 14:03:24.128338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 14:03:24.128356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:03:24.128373 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 14:03:24.128392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 14:03:24.128410 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:03:24.128428 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 14:03:24.128446 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 30 14:03:24.128469 kernel: Booting paravirtualized kernel on KVM
Jan 30 14:03:24.128487 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:03:24.128506 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 14:03:24.128524 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 14:03:24.128542 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 14:03:24.128560 kernel: pcpu-alloc: [0] 0 1
Jan 30 14:03:24.128578 kernel: kvm-guest: PV spinlocks enabled
Jan 30 14:03:24.128596 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 14:03:24.128616 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 14:03:24.128639 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:03:24.128657 kernel: random: crng init done
Jan 30 14:03:24.128673 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 14:03:24.128691 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:03:24.128717 kernel: Fallback order for Node 0: 0
Jan 30 14:03:24.128735 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 30 14:03:24.128753 kernel: Policy zone: Normal
Jan 30 14:03:24.128771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:03:24.128794 kernel: software IO TLB: area num 2.
Jan 30 14:03:24.128811 kernel: Memory: 7513372K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346952K reserved, 0K cma-reserved)
Jan 30 14:03:24.128843 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:03:24.128860 kernel: Kernel/User page tables isolation: enabled
Jan 30 14:03:24.128879 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 14:03:24.128896 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:03:24.128913 kernel: Dynamic Preempt: voluntary
Jan 30 14:03:24.128931 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:03:24.128952 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:03:24.128989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:03:24.129009 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:03:24.129028 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:03:24.129051 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:03:24.129070 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:03:24.129090 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:03:24.129109 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 30 14:03:24.129128 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:03:24.129148 kernel: Console: colour dummy device 80x25
Jan 30 14:03:24.129171 kernel: printk: console [ttyS0] enabled
Jan 30 14:03:24.129191 kernel: ACPI: Core revision 20230628
Jan 30 14:03:24.129211 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:03:24.129229 kernel: x2apic enabled
Jan 30 14:03:24.129249 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 14:03:24.129269 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 30 14:03:24.129288 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 14:03:24.129308 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 30 14:03:24.129332 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 30 14:03:24.129351 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 30 14:03:24.129371 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:03:24.129390 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 14:03:24.129410 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 14:03:24.129429 kernel: Spectre V2 : Mitigation: IBRS
Jan 30 14:03:24.129448 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:03:24.129468 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 14:03:24.129487 kernel: RETBleed: Mitigation: IBRS
Jan 30 14:03:24.129511 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:03:24.129530 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 30 14:03:24.129550 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:03:24.129570 kernel: MDS: Mitigation: Clear CPU buffers
Jan 30 14:03:24.129589 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 14:03:24.129609 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:03:24.129627 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:03:24.129647 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:03:24.129667 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:03:24.129690 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 30 14:03:24.129716 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:03:24.129736 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:03:24.129755 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:03:24.129774 kernel: landlock: Up and running.
Jan 30 14:03:24.129793 kernel: SELinux: Initializing.
Jan 30 14:03:24.129813 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.129851 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.129866 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 30 14:03:24.129886 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:03:24.129902 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:03:24.129918 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:03:24.129934 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 30 14:03:24.129950 kernel: signal: max sigframe size: 1776
Jan 30 14:03:24.129968 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:03:24.129987 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:03:24.130004 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 14:03:24.130022 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:03:24.130045 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:03:24.130064 kernel: .... node #0, CPUs: #1
Jan 30 14:03:24.130083 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 30 14:03:24.130101 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 14:03:24.130118 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:03:24.130136 kernel: smpboot: Max logical packages: 1
Jan 30 14:03:24.130154 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 30 14:03:24.130171 kernel: devtmpfs: initialized
Jan 30 14:03:24.130193 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:03:24.130211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 30 14:03:24.130229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:03:24.130245 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:03:24.130264 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:03:24.130282 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:03:24.130300 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:03:24.130319 kernel: audit: type=2000 audit(1738245802.743:1): state=initialized audit_enabled=0 res=1
Jan 30 14:03:24.130337 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:03:24.130359 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:03:24.130378 kernel: cpuidle: using governor menu
Jan 30 14:03:24.130394 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:03:24.130413 kernel: dca service started, version 1.12.1
Jan 30 14:03:24.130431 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:03:24.130449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:03:24.130467 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:03:24.130485 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:03:24.130504 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:03:24.130526 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:03:24.130545 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:03:24.130564 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:03:24.130582 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:03:24.130601 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:03:24.130620 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 30 14:03:24.130638 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 14:03:24.130657 kernel: ACPI: Interpreter enabled
Jan 30 14:03:24.130673 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 14:03:24.130703 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:03:24.130721 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:03:24.130740 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 30 14:03:24.130758 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 30 14:03:24.130777 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:03:24.131093 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:03:24.131296 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 30 14:03:24.131496 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 30 14:03:24.131520 kernel: PCI host bridge to bus 0000:00
Jan 30 14:03:24.131724 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:03:24.131913 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:03:24.132081 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:03:24.132244 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 30 14:03:24.132409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:03:24.132614 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 30 14:03:24.132848 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 30 14:03:24.133046 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 30 14:03:24.133236 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 30 14:03:24.133480 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 30 14:03:24.133668 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 30 14:03:24.133882 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 30 14:03:24.134074 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:03:24.134257 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 30 14:03:24.134436 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 30 14:03:24.134623 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 14:03:24.134870 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 30 14:03:24.135070 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 30 14:03:24.135102 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 14:03:24.135123 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 14:03:24.135142 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 14:03:24.135162 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 14:03:24.135182 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 30 14:03:24.135201 kernel: iommu: Default domain type: Translated
Jan 30 14:03:24.135221 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:03:24.135241 kernel: efivars: Registered efivars operations
Jan 30 14:03:24.135260 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:03:24.135284 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:03:24.135304 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 30 14:03:24.135324 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 30 14:03:24.135344 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 30 14:03:24.135362 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 30 14:03:24.135382 kernel: vgaarb: loaded
Jan 30 14:03:24.135402 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 14:03:24.135422 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:03:24.135442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:03:24.135465 kernel: pnp: PnP ACPI init
Jan 30 14:03:24.135484 kernel: pnp: PnP ACPI: found 7 devices
Jan 30 14:03:24.135504 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:03:24.135525 kernel: NET: Registered PF_INET protocol family
Jan 30 14:03:24.135545 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 14:03:24.135565 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 14:03:24.135585 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:03:24.135603 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:03:24.135623 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 14:03:24.135647 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 14:03:24.135667 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.135687 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 14:03:24.135724 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:03:24.135744 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:03:24.135985 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:03:24.136153 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:03:24.136320 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:03:24.136492 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 30 14:03:24.136683 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 30 14:03:24.136718 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:03:24.136739 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 14:03:24.136759 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 30 14:03:24.136779 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 14:03:24.136800 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 30 14:03:24.136845 kernel: clocksource: Switched to clocksource tsc
Jan 30 14:03:24.136869 kernel: Initialise system trusted keyrings
Jan 30 14:03:24.136885 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 14:03:24.136903 kernel: Key type asymmetric registered
Jan 30 14:03:24.136919 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:03:24.136937 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:03:24.136954 kernel: io scheduler mq-deadline registered
Jan 30 14:03:24.136972 kernel: io scheduler kyber registered
Jan 30 14:03:24.136990 kernel: io scheduler bfq registered
Jan 30 14:03:24.137008 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:03:24.137031 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 30 14:03:24.137234 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 30 14:03:24.137258 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 30 14:03:24.137445 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 30 14:03:24.137468 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 30 14:03:24.137655 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 30 14:03:24.137681 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:03:24.137710 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:03:24.137730 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 14:03:24.137755 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 30 14:03:24.137776 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 30 14:03:24.138044 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 30 14:03:24.138075 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 14:03:24.138095 kernel: i8042: Warning: Keylock active
Jan 30 14:03:24.138114 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 14:03:24.138134 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 14:03:24.138315 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 30 14:03:24.138497 kernel: rtc_cmos 00:00: registered as rtc0
Jan 30 14:03:24.138668 kernel: rtc_cmos 00:00: setting system clock to 2025-01-30T14:03:23 UTC (1738245803)
Jan 30 14:03:24.138892 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 30 14:03:24.138915 kernel: intel_pstate: CPU model not supported
Jan 30 14:03:24.138935 kernel: pstore: Using crash dump compression: deflate
Jan 30 14:03:24.138953 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 14:03:24.138971 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:03:24.138990 kernel: Segment Routing with IPv6
Jan 30 14:03:24.139025 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:03:24.139045 kernel: NET: Registered PF_PACKET protocol family
Jan 30
14:03:24.139065 kernel: Key type dns_resolver registered Jan 30 14:03:24.139083 kernel: IPI shorthand broadcast: enabled Jan 30 14:03:24.139102 kernel: sched_clock: Marking stable (880004826, 165202055)->(1157908919, -112702038) Jan 30 14:03:24.139119 kernel: registered taskstats version 1 Jan 30 14:03:24.139137 kernel: Loading compiled-in X.509 certificates Jan 30 14:03:24.139157 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 14:03:24.139175 kernel: Key type .fscrypt registered Jan 30 14:03:24.139198 kernel: Key type fscrypt-provisioning registered Jan 30 14:03:24.139218 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:03:24.139238 kernel: ima: No architecture policies found Jan 30 14:03:24.139256 kernel: clk: Disabling unused clocks Jan 30 14:03:24.139273 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 14:03:24.139290 kernel: Write protecting the kernel read-only data: 36864k Jan 30 14:03:24.139310 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 14:03:24.139328 kernel: Run /init as init process Jan 30 14:03:24.139351 kernel: with arguments: Jan 30 14:03:24.139370 kernel: /init Jan 30 14:03:24.139388 kernel: with environment: Jan 30 14:03:24.139406 kernel: HOME=/ Jan 30 14:03:24.139423 kernel: TERM=linux Jan 30 14:03:24.139442 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:03:24.139460 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 14:03:24.139485 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:03:24.139514 systemd[1]: Detected virtualization google. 
Jan 30 14:03:24.139535 systemd[1]: Detected architecture x86-64. Jan 30 14:03:24.139552 systemd[1]: Running in initrd. Jan 30 14:03:24.139571 systemd[1]: No hostname configured, using default hostname. Jan 30 14:03:24.139590 systemd[1]: Hostname set to . Jan 30 14:03:24.139610 systemd[1]: Initializing machine ID from random generator. Jan 30 14:03:24.139629 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:03:24.139649 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:03:24.139674 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:03:24.139702 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 14:03:24.139724 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:03:24.139745 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:03:24.139763 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:03:24.139786 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:03:24.139810 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:03:24.139853 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:03:24.139873 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:03:24.139924 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:03:24.139944 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:03:24.139963 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:03:24.139981 systemd[1]: Reached target timers.target - Timer Units. 
Jan 30 14:03:24.140005 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:03:24.140022 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:03:24.140046 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:03:24.140071 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:03:24.140095 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:03:24.140117 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:03:24.140136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:03:24.140157 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:03:24.140182 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:03:24.140203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:03:24.140225 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:03:24.140245 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:03:24.140266 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:03:24.140287 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:03:24.140308 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:03:24.140372 systemd-journald[183]: Collecting audit messages is disabled. Jan 30 14:03:24.140422 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:03:24.140443 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:03:24.140463 systemd-journald[183]: Journal started Jan 30 14:03:24.140508 systemd-journald[183]: Runtime Journal (/run/log/journal/adee386b4e8a495ca0d8f2d7e983561a) is 8.0M, max 148.7M, 140.7M free. 
Jan 30 14:03:24.145934 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:03:24.150957 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:03:24.152166 systemd-modules-load[184]: Inserted module 'overlay' Jan 30 14:03:24.170118 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:03:24.173393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:03:24.185799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:03:24.198626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:03:24.204859 kernel: Bridge firewalling registered Jan 30 14:03:24.205993 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 30 14:03:24.213139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:03:24.216805 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:03:24.225424 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:03:24.230353 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:03:24.242068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:03:24.244080 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:03:24.273146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:03:24.273802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:03:24.280047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:03:24.291139 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 14:03:24.300106 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:03:24.329795 systemd-resolved[216]: Positive Trust Anchors: Jan 30 14:03:24.330417 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:03:24.330648 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:03:24.351121 dracut-cmdline[218]: dracut-dracut-053 Jan 30 14:03:24.351121 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 14:03:24.337348 systemd-resolved[216]: Defaulting to hostname 'linux'. Jan 30 14:03:24.339800 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:03:24.377097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:03:24.444875 kernel: SCSI subsystem initialized Jan 30 14:03:24.455877 kernel: Loading iSCSI transport class v2.0-870. 
Jan 30 14:03:24.467875 kernel: iscsi: registered transport (tcp) Jan 30 14:03:24.492348 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:03:24.492446 kernel: QLogic iSCSI HBA Driver Jan 30 14:03:24.545620 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:03:24.560054 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:03:24.590333 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 14:03:24.590425 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:03:24.592575 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:03:24.637881 kernel: raid6: avx2x4 gen() 17758 MB/s Jan 30 14:03:24.654875 kernel: raid6: avx2x2 gen() 17871 MB/s Jan 30 14:03:24.672416 kernel: raid6: avx2x1 gen() 13807 MB/s Jan 30 14:03:24.672487 kernel: raid6: using algorithm avx2x2 gen() 17871 MB/s Jan 30 14:03:24.690306 kernel: raid6: .... xor() 17437 MB/s, rmw enabled Jan 30 14:03:24.690376 kernel: raid6: using avx2x2 recovery algorithm Jan 30 14:03:24.713860 kernel: xor: automatically using best checksumming function avx Jan 30 14:03:24.895855 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:03:24.909156 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:03:24.916078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:03:24.947673 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 30 14:03:24.954695 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:03:24.965186 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:03:24.995220 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 30 14:03:25.033511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 14:03:25.044120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:03:25.136656 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:03:25.150145 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:03:25.189908 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 14:03:25.204524 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:03:25.232979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:03:25.254995 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:03:25.293289 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:03:25.293349 kernel: scsi host0: Virtio SCSI HBA Jan 30 14:03:25.292147 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:03:25.346009 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 30 14:03:25.339897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:03:25.340125 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:03:25.443979 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 14:03:25.444029 kernel: AES CTR mode by8 optimization enabled Jan 30 14:03:25.444055 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 30 14:03:25.489981 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 30 14:03:25.490265 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 30 14:03:25.490501 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 30 14:03:25.490722 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 30 14:03:25.490974 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Jan 30 14:03:25.490999 kernel: GPT:17805311 != 25165823 Jan 30 14:03:25.491020 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:03:25.491041 kernel: GPT:17805311 != 25165823 Jan 30 14:03:25.491062 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:03:25.491084 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.491116 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 30 14:03:25.365723 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:03:25.375361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:03:25.375623 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:03:25.396038 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:03:25.441317 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:03:25.494143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:03:25.569508 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (457) Jan 30 14:03:25.575484 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 30 14:03:25.596029 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (454) Jan 30 14:03:25.607335 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:03:25.620567 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 30 14:03:25.632814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 30 14:03:25.646205 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 30 14:03:25.673758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Jan 30 14:03:25.699072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:03:25.732077 disk-uuid[538]: Primary Header is updated. Jan 30 14:03:25.732077 disk-uuid[538]: Secondary Entries is updated. Jan 30 14:03:25.732077 disk-uuid[538]: Secondary Header is updated. Jan 30 14:03:25.740184 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:03:25.771088 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.785891 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.808849 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:25.817633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:03:26.805851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 14:03:26.807926 disk-uuid[539]: The operation has completed successfully. Jan 30 14:03:26.885447 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:03:26.885595 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:03:26.923106 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 14:03:26.943379 sh[565]: Success Jan 30 14:03:26.955969 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 14:03:27.052806 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:03:27.059998 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:03:27.090487 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 14:03:27.129933 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 14:03:27.130028 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:03:27.130056 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:03:27.139392 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:03:27.146253 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:03:27.184879 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 14:03:27.192064 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:03:27.193077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:03:27.203097 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:03:27.271247 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:27.271291 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:03:27.271315 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:03:27.271336 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:03:27.271357 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:03:27.271349 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:03:27.302065 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:27.283612 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 14:03:27.300734 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:03:27.338109 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 14:03:27.446659 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:03:27.457325 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:03:27.545477 systemd-networkd[748]: lo: Link UP Jan 30 14:03:27.545490 systemd-networkd[748]: lo: Gained carrier Jan 30 14:03:27.546504 ignition[655]: Ignition 2.19.0 Jan 30 14:03:27.548646 systemd-networkd[748]: Enumeration completed Jan 30 14:03:27.546517 ignition[655]: Stage: fetch-offline Jan 30 14:03:27.549562 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:03:27.546590 ignition[655]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.549570 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:03:27.546610 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.550660 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:03:27.546795 ignition[655]: parsed url from cmdline: "" Jan 30 14:03:27.551674 systemd-networkd[748]: eth0: Link UP Jan 30 14:03:27.546804 ignition[655]: no config URL provided Jan 30 14:03:27.551680 systemd-networkd[748]: eth0: Gained carrier Jan 30 14:03:27.546815 ignition[655]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:03:27.551691 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:03:27.546885 ignition[655]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:03:27.567951 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.55/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 14:03:27.547665 ignition[655]: failed to fetch config: resource requires networking Jan 30 14:03:27.576289 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 14:03:27.548039 ignition[655]: Ignition finished successfully Jan 30 14:03:27.594803 systemd[1]: Reached target network.target - Network. Jan 30 14:03:27.656594 ignition[757]: Ignition 2.19.0 Jan 30 14:03:27.615068 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 30 14:03:27.656605 ignition[757]: Stage: fetch Jan 30 14:03:27.668224 unknown[757]: fetched base config from "system" Jan 30 14:03:27.656809 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.668236 unknown[757]: fetched base config from "system" Jan 30 14:03:27.656836 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.668246 unknown[757]: fetched user config from "gcp" Jan 30 14:03:27.656980 ignition[757]: parsed url from cmdline: "" Jan 30 14:03:27.670812 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 14:03:27.656988 ignition[757]: no config URL provided Jan 30 14:03:27.688199 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 14:03:27.656997 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:03:27.735854 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:03:27.657013 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:03:27.755059 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:03:27.657042 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 30 14:03:27.787343 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:03:27.661703 ignition[757]: GET result: OK Jan 30 14:03:27.807748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 30 14:03:27.661807 ignition[757]: parsing config with SHA512: 865f2328c43cc8f208358fe9f7a6efecb7b9290e98e29d410e7d6711daa19e5d648821b43d6be21db53b778acff0c0e0cca52fbbeedc7d8468a46052acdab04e Jan 30 14:03:27.825214 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:03:27.668947 ignition[757]: fetch: fetch complete Jan 30 14:03:27.845114 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:03:27.668955 ignition[757]: fetch: fetch passed Jan 30 14:03:27.851192 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:03:27.669013 ignition[757]: Ignition finished successfully Jan 30 14:03:27.868226 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:03:27.733287 ignition[763]: Ignition 2.19.0 Jan 30 14:03:27.889298 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:03:27.733296 ignition[763]: Stage: kargs Jan 30 14:03:27.733507 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.733519 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.734632 ignition[763]: kargs: kargs passed Jan 30 14:03:27.734691 ignition[763]: Ignition finished successfully Jan 30 14:03:27.775144 ignition[768]: Ignition 2.19.0 Jan 30 14:03:27.775153 ignition[768]: Stage: disks Jan 30 14:03:27.775362 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:27.775374 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:27.776368 ignition[768]: disks: disks passed Jan 30 14:03:27.776437 ignition[768]: Ignition finished successfully Jan 30 14:03:27.954938 systemd-fsck[777]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 14:03:28.139014 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:03:28.172032 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 30 14:03:28.291874 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 14:03:28.292387 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:03:28.293308 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:03:28.325976 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:03:28.348969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:03:28.357569 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 14:03:28.357658 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:03:28.357704 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:03:28.368869 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (785) Jan 30 14:03:28.389789 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:28.389909 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:03:28.389938 kernel: BTRFS info (device sda6): using free space tree Jan 30 14:03:28.416991 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 14:03:28.417078 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 14:03:28.460219 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 14:03:28.469217 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:03:28.493154 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 14:03:28.627452 initrd-setup-root[809]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:03:28.639015 initrd-setup-root[816]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:03:28.648966 initrd-setup-root[823]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:03:28.659987 initrd-setup-root[830]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:03:28.821966 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:03:28.826986 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:03:28.847106 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:03:28.877640 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:03:28.895114 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 14:03:28.928911 ignition[897]: INFO : Ignition 2.19.0 Jan 30 14:03:28.928911 ignition[897]: INFO : Stage: mount Jan 30 14:03:28.952029 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:03:28.952029 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 30 14:03:28.952029 ignition[897]: INFO : mount: mount passed Jan 30 14:03:28.952029 ignition[897]: INFO : Ignition finished successfully Jan 30 14:03:28.933284 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:03:28.941155 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:03:28.976396 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:03:28.997188 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 14:03:29.081031 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (910)
Jan 30 14:03:29.081081 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 14:03:29.081098 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:03:29.081112 kernel: BTRFS info (device sda6): using free space tree
Jan 30 14:03:29.096558 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 30 14:03:29.096676 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 14:03:29.101029 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:03:29.102375 systemd-networkd[748]: eth0: Gained IPv6LL
Jan 30 14:03:29.141324 ignition[927]: INFO : Ignition 2.19.0
Jan 30 14:03:29.141324 ignition[927]: INFO : Stage: files
Jan 30 14:03:29.158015 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:03:29.158015 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 14:03:29.158015 ignition[927]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:03:29.158015 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:03:29.158015 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:03:29.158015 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 14:03:29.158015 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 14:03:29.154017 unknown[927]: wrote ssh authorized keys file for user: core
Jan 30 14:03:31.376266 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 14:03:31.621007 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 14:03:31.638020 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 30 14:03:31.935899 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 14:03:32.453218 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 30 14:03:32.453218 ignition[927]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:03:32.492050 ignition[927]: INFO : files: files passed
Jan 30 14:03:32.492050 ignition[927]: INFO : Ignition finished successfully
Jan 30 14:03:32.458241 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:03:32.478113 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:03:32.508677 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:03:32.520606 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:03:32.708051 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:03:32.708051 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:03:32.520751 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:03:32.746156 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:03:32.606424 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:03:32.632478 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:03:32.654078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:03:32.742917 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:03:32.743057 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:03:32.757282 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:03:32.782073 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:03:32.802157 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:03:32.809071 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:03:32.875619 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:03:32.904146 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:03:32.924887 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:03:32.939352 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:03:32.961340 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:03:32.980332 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:03:32.980541 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:03:33.008390 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:03:33.035311 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:03:33.045413 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:03:33.060366 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:03:33.078388 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:03:33.098420 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:03:33.116445 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:03:33.133403 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:03:33.154427 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:03:33.171395 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:03:33.188434 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:03:33.188672 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:03:33.219416 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:03:33.229436 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:03:33.248358 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:03:33.248575 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:03:33.268346 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:03:33.268542 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:03:33.307337 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:03:33.307573 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:03:33.316451 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:03:33.316634 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:03:33.385050 ignition[979]: INFO : Ignition 2.19.0
Jan 30 14:03:33.385050 ignition[979]: INFO : Stage: umount
Jan 30 14:03:33.385050 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:03:33.385050 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 30 14:03:33.343255 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:03:33.422260 ignition[979]: INFO : umount: umount passed
Jan 30 14:03:33.422260 ignition[979]: INFO : Ignition finished successfully
Jan 30 14:03:33.406150 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:03:33.407189 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:03:33.407403 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:03:33.476363 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:03:33.476543 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:03:33.511065 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:03:33.512225 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:03:33.512343 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:03:33.534903 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:03:33.535050 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:03:33.545411 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:03:33.545556 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:03:33.560569 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:03:33.560634 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:03:33.587218 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:03:33.587300 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:03:33.612245 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 14:03:33.612325 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 14:03:33.639232 systemd[1]: Stopped target network.target - Network.
Jan 30 14:03:33.648238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:03:33.648332 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:03:33.674197 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:03:33.682201 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:03:33.685947 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:03:33.698256 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:03:33.724163 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:03:33.733301 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:03:33.733363 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:03:33.749340 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:03:33.749402 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:03:33.784198 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:03:33.784289 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:03:33.793287 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:03:33.793371 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:03:33.810301 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:03:33.810390 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:03:33.848432 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:03:33.853951 systemd-networkd[748]: eth0: DHCPv6 lease lost
Jan 30 14:03:33.868250 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:03:33.886597 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:03:33.886742 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:03:33.913815 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:03:33.914128 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:03:33.924162 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:03:33.924223 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:03:33.963006 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:03:33.983995 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:03:33.984126 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:03:33.995242 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:03:33.995328 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:03:34.005294 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:03:34.005371 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:03:34.033211 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:03:34.033305 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:03:34.054324 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:03:34.072663 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:03:34.072875 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:03:34.100313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:03:34.100467 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:03:34.111251 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:03:34.111304 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:03:34.138180 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:03:34.138268 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:03:34.167331 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:03:34.167416 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:03:34.194310 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:03:34.194413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:03:34.240207 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:03:34.515039 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:03:34.243186 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:03:34.243274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:03:34.292222 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 14:03:34.292323 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:03:34.314212 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:03:34.314291 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:03:34.336195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:03:34.336280 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:03:34.345916 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:03:34.346059 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:03:34.365655 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:03:34.365802 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:03:34.384440 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:03:34.417081 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:03:34.460366 systemd[1]: Switching root.
Jan 30 14:03:34.665994 systemd-journald[183]: Journal stopped
Jan 30 14:03:37.301515 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:03:37.301580 kernel: SELinux: policy capability open_perms=1
Jan 30 14:03:37.301601 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:03:37.301619 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:03:37.301635 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:03:37.301652 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:03:37.301670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:03:37.301692 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:03:37.301710 kernel: audit: type=1403 audit(1738245815.132:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:03:37.301730 systemd[1]: Successfully loaded SELinux policy in 90.613ms.
Jan 30 14:03:37.301750 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.123ms.
Jan 30 14:03:37.301771 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:03:37.301790 systemd[1]: Detected virtualization google.
Jan 30 14:03:37.301808 systemd[1]: Detected architecture x86-64.
Jan 30 14:03:37.301910 systemd[1]: Detected first boot.
Jan 30 14:03:37.301933 systemd[1]: Initializing machine ID from random generator.
Jan 30 14:03:37.301953 zram_generator::config[1020]: No configuration found.
Jan 30 14:03:37.301973 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:03:37.301993 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 14:03:37.302018 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:03:37.302040 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:03:37.302060 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:03:37.302080 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:03:37.302099 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:03:37.302119 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:03:37.302140 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:03:37.302173 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:03:37.302193 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:03:37.302213 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:03:37.302233 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:03:37.302254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:03:37.302277 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:03:37.302299 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:03:37.302321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:03:37.302347 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:03:37.302368 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 14:03:37.302387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:03:37.302406 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 14:03:37.302425 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 14:03:37.302448 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:03:37.302474 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:03:37.302495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:03:37.302535 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:03:37.302561 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:03:37.302583 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:03:37.302604 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:03:37.302625 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:03:37.302648 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:03:37.302671 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:03:37.302694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:03:37.302722 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:03:37.302747 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:03:37.302766 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:03:37.302787 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:03:37.302807 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:03:37.302852 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:03:37.302873 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:03:37.302894 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:03:37.302916 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:03:37.302940 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:03:37.302960 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:03:37.302984 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:03:37.303005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:03:37.303035 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:03:37.303059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:03:37.303080 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:03:37.303099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:03:37.303119 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:03:37.303141 kernel: ACPI: bus type drm_connector registered
Jan 30 14:03:37.303173 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:03:37.303197 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:03:37.303338 kernel: fuse: init (API version 7.39)
Jan 30 14:03:37.303386 kernel: loop: module loaded
Jan 30 14:03:37.303411 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 14:03:37.303437 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 14:03:37.303462 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 14:03:37.303486 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 14:03:37.303512 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:03:37.303538 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:03:37.303610 systemd-journald[1107]: Collecting audit messages is disabled.
Jan 30 14:03:37.303661 systemd-journald[1107]: Journal started
Jan 30 14:03:37.303708 systemd-journald[1107]: Runtime Journal (/run/log/journal/a56f750df5584647b57b031d1988ce0e) is 8.0M, max 148.7M, 140.7M free.
Jan 30 14:03:36.078292 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:03:36.108624 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 14:03:36.109254 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 14:03:37.321889 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:03:37.354945 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:03:37.389797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:03:37.389928 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 14:03:37.389960 systemd[1]: Stopped verity-setup.service.
Jan 30 14:03:37.428864 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:03:37.439874 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:03:37.451531 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:03:37.462366 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:03:37.472286 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:03:37.483313 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:03:37.494280 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:03:37.504294 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:03:37.514448 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:03:37.526502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:03:37.538555 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:03:37.538849 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:03:37.550496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:03:37.550726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:03:37.562479 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:03:37.562716 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:03:37.573425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:03:37.573658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:03:37.585425 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:03:37.585663 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:03:37.596449 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:03:37.596686 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:03:37.607461 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:03:37.617388 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:03:37.629447 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:03:37.641444 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:03:37.666103 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:03:37.688053 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:03:37.707079 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:03:37.717040 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:03:37.717148 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:03:37.728365 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:03:37.752213 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:03:37.773159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:03:37.783172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:03:37.791114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:03:37.809636 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:03:37.818603 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:03:37.829276 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:03:37.839092 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:03:37.843965 systemd-journald[1107]: Time spent on flushing to /var/log/journal/a56f750df5584647b57b031d1988ce0e is 88.988ms for 929 entries.
Jan 30 14:03:37.843965 systemd-journald[1107]: System Journal (/var/log/journal/a56f750df5584647b57b031d1988ce0e) is 8.0M, max 584.8M, 576.8M free.
Jan 30 14:03:37.972380 systemd-journald[1107]: Received client request to flush runtime journal.
Jan 30 14:03:37.972455 kernel: loop0: detected capacity change from 0 to 205544 Jan 30 14:03:37.853689 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:03:37.872113 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:03:37.894085 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:03:37.915235 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:03:37.931872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:03:37.949256 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:03:37.963973 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:03:37.976864 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:03:37.988539 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:03:38.007648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:03:38.018547 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Jan 30 14:03:38.018584 systemd-tmpfiles[1140]: ACLs are not supported, ignoring. Jan 30 14:03:38.037980 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:03:38.051373 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:03:38.060860 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:03:38.082017 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:03:38.098317 kernel: loop1: detected capacity change from 0 to 54824 Jan 30 14:03:38.107142 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 30 14:03:38.118665 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:03:38.121903 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:03:38.137642 udevadm[1141]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 14:03:38.189856 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 14:03:38.209291 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:03:38.237109 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:03:38.302698 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Jan 30 14:03:38.302734 systemd-tmpfiles[1160]: ACLs are not supported, ignoring. Jan 30 14:03:38.320446 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:03:38.323848 kernel: loop3: detected capacity change from 0 to 142488 Jan 30 14:03:38.433864 kernel: loop4: detected capacity change from 0 to 205544 Jan 30 14:03:38.480851 kernel: loop5: detected capacity change from 0 to 54824 Jan 30 14:03:38.521030 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 14:03:38.581854 kernel: loop7: detected capacity change from 0 to 142488 Jan 30 14:03:38.644693 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 30 14:03:38.646735 (sd-merge)[1166]: Merged extensions into '/usr'. Jan 30 14:03:38.658962 systemd[1]: Reloading requested from client PID 1139 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:03:38.658999 systemd[1]: Reloading... Jan 30 14:03:38.813996 zram_generator::config[1188]: No configuration found. 
Jan 30 14:03:39.078434 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:03:39.087844 ldconfig[1133]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:03:39.194304 systemd[1]: Reloading finished in 534 ms. Jan 30 14:03:39.233106 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:03:39.243718 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:03:39.269180 systemd[1]: Starting ensure-sysext.service... Jan 30 14:03:39.285904 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:03:39.302972 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:03:39.302999 systemd[1]: Reloading... Jan 30 14:03:39.348226 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:03:39.348940 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:03:39.353355 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:03:39.353929 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jan 30 14:03:39.354065 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Jan 30 14:03:39.364666 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:03:39.364690 systemd-tmpfiles[1233]: Skipping /boot Jan 30 14:03:39.392359 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 14:03:39.392388 systemd-tmpfiles[1233]: Skipping /boot Jan 30 14:03:39.470855 zram_generator::config[1259]: No configuration found. Jan 30 14:03:39.608483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:03:39.674253 systemd[1]: Reloading finished in 370 ms. Jan 30 14:03:39.694869 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:03:39.712662 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:03:39.736264 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:03:39.753973 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:03:39.780290 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:03:39.799393 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:03:39.816993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:03:39.838312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:03:39.841879 augenrules[1322]: No rules Jan 30 14:03:39.851658 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:03:39.871249 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:03:39.871896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:03:39.884285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:03:39.902038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 14:03:39.903856 systemd-udevd[1318]: Using default interface naming scheme 'v255'. Jan 30 14:03:39.921754 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:03:39.932168 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:03:39.939590 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:03:39.949059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:03:39.953534 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:03:39.966887 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:03:39.967181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:03:39.978593 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:03:39.991921 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:03:40.003982 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:03:40.004305 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:03:40.016876 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:03:40.017420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:03:40.046632 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:03:40.075925 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:03:40.114396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 30 14:03:40.116016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:03:40.125226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:03:40.143186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:03:40.165080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:03:40.182106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:03:40.202271 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 14:03:40.211160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:03:40.242783 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:03:40.253062 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:03:40.277095 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:03:40.286999 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:03:40.287060 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:03:40.291086 systemd[1]: Finished ensure-sysext.service. Jan 30 14:03:40.291754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:03:40.292025 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:03:40.292604 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:03:40.293874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:03:40.317116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 30 14:03:40.317920 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:03:40.335316 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 14:03:40.337687 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:03:40.339182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:03:40.350589 kernel: ACPI: button: Power Button [PWRF] Jan 30 14:03:40.350741 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Jan 30 14:03:40.350777 kernel: ACPI: button: Sleep Button [SLPF] Jan 30 14:03:40.359850 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 30 14:03:40.384673 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:03:40.386205 systemd-resolved[1314]: Positive Trust Anchors: Jan 30 14:03:40.386367 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:03:40.386440 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:03:40.414678 systemd-resolved[1314]: Defaulting to hostname 'linux'. Jan 30 14:03:40.424804 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:03:40.437684 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 14:03:40.454511 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jan 30 14:03:40.465176 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:03:40.486270 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 30 14:03:40.497019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:03:40.498003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:03:40.528853 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1338) Jan 30 14:03:40.543327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:03:40.570871 kernel: EDAC MC: Ver: 3.0.0 Jan 30 14:03:40.600863 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 30 14:03:40.649970 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 30 14:03:40.668170 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 30 14:03:40.675127 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:03:40.681579 systemd-networkd[1371]: lo: Link UP Jan 30 14:03:40.681592 systemd-networkd[1371]: lo: Gained carrier Jan 30 14:03:40.684266 systemd-networkd[1371]: Enumeration completed Jan 30 14:03:40.684427 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:03:40.684638 systemd[1]: Reached target network.target - Network. Jan 30 14:03:40.686451 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:03:40.686465 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 14:03:40.689226 systemd-networkd[1371]: eth0: Link UP Jan 30 14:03:40.689235 systemd-networkd[1371]: eth0: Gained carrier Jan 30 14:03:40.689265 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:03:40.694978 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:03:40.698155 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:03:40.710200 systemd-networkd[1371]: eth0: DHCPv4 address 10.128.0.55/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 30 14:03:40.722000 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:03:40.734945 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:03:40.740437 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:03:40.768916 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:03:40.783778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:03:40.818191 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:03:40.829377 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:03:40.839079 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:03:40.849186 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:03:40.860116 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:03:40.872286 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:03:40.882253 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 30 14:03:40.894083 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:03:40.905043 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:03:40.905118 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:03:40.914033 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:03:40.923814 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:03:40.935973 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:03:40.949454 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:03:40.964112 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:03:40.977042 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:03:40.987330 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:03:40.989033 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:03:40.997045 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:03:41.006123 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:03:41.006178 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:03:41.013061 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:03:41.033140 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:03:41.055178 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:03:41.086274 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:03:41.106920 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 30 14:03:41.116990 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:03:41.119615 jq[1422]: false Jan 30 14:03:41.129067 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:03:41.140432 coreos-metadata[1420]: Jan 30 14:03:41.140 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 30 14:03:41.147989 coreos-metadata[1420]: Jan 30 14:03:41.147 INFO Fetch successful Jan 30 14:03:41.147989 coreos-metadata[1420]: Jan 30 14:03:41.147 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 30 14:03:41.148190 coreos-metadata[1420]: Jan 30 14:03:41.148 INFO Fetch successful Jan 30 14:03:41.148844 coreos-metadata[1420]: Jan 30 14:03:41.148 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 30 14:03:41.150087 coreos-metadata[1420]: Jan 30 14:03:41.149 INFO Fetch successful Jan 30 14:03:41.150087 coreos-metadata[1420]: Jan 30 14:03:41.149 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 30 14:03:41.150265 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 14:03:41.152853 coreos-metadata[1420]: Jan 30 14:03:41.151 INFO Fetch successful Jan 30 14:03:41.166056 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 30 14:03:41.182919 extend-filesystems[1425]: Found loop4 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found loop5 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found loop6 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found loop7 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda1 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda2 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda3 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found usr Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda4 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda6 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda7 Jan 30 14:03:41.184413 extend-filesystems[1425]: Found sda9 Jan 30 14:03:41.184413 extend-filesystems[1425]: Checking size of /dev/sda9 Jan 30 14:03:41.343228 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 30 14:03:41.343290 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 30 14:03:41.343319 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1346) Jan 30 14:03:41.184071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:03:41.343978 extend-filesystems[1425]: Resized partition /dev/sda9 Jan 30 14:03:41.203635 dbus-daemon[1421]: [system] SELinux support is enabled Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: ---------------------------------------------------- Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: corporation. Support and training for ntp-4 are Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: available at https://www.nwtime.org/support Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: ---------------------------------------------------- Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: proto: precision = 0.081 usec (-23) Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: basedate set to 2025-01-17 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: gps base set to 2025-01-19 (week 2350) Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Listen normally on 3 eth0 10.128.0.55:123 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Listen normally on 4 lo [::1]:123 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:37%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:37%2#123 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:37%2 Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:03:41.374907 ntpd[1428]: 30 Jan 14:03:41 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:03:41.207530 systemd[1]: 
Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:03:41.384316 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:03:41.384316 extend-filesystems[1444]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 30 14:03:41.384316 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 30 14:03:41.384316 extend-filesystems[1444]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 30 14:03:41.207281 dbus-daemon[1421]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1371 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 14:03:41.221113 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:03:41.480672 extend-filesystems[1425]: Resized filesystem in /dev/sda9 Jan 30 14:03:41.248175 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 30 14:03:41.303702 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 30 14:03:41.248208 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:03:41.304565 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:03:41.248223 ntpd[1428]: ---------------------------------------------------- Jan 30 14:03:41.313151 systemd[1]: Starting update-engine.service - Update Engine... 
Jan 30 14:03:41.488996 update_engine[1450]: I20250130 14:03:41.434808 1450 main.cc:92] Flatcar Update Engine starting Jan 30 14:03:41.488996 update_engine[1450]: I20250130 14:03:41.441056 1450 update_check_scheduler.cc:74] Next update check in 8m48s Jan 30 14:03:41.248239 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:03:41.340015 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:03:41.489548 jq[1453]: true Jan 30 14:03:41.248252 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:03:41.355119 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:03:41.248267 ntpd[1428]: corporation. Support and training for ntp-4 are Jan 30 14:03:41.372774 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:03:41.248280 ntpd[1428]: available at https://www.nwtime.org/support Jan 30 14:03:41.385424 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:03:41.248294 ntpd[1428]: ---------------------------------------------------- Jan 30 14:03:41.386926 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:03:41.252640 ntpd[1428]: proto: precision = 0.081 usec (-23) Jan 30 14:03:41.387457 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:03:41.253160 ntpd[1428]: basedate set to 2025-01-17 Jan 30 14:03:41.387718 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:03:41.253182 ntpd[1428]: gps base set to 2025-01-19 (week 2350) Jan 30 14:03:41.399721 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:03:41.265152 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:03:41.399994 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 14:03:41.265231 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:03:41.417483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:03:41.265494 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:03:41.417744 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:03:41.265560 ntpd[1428]: Listen normally on 3 eth0 10.128.0.55:123 Jan 30 14:03:41.265624 ntpd[1428]: Listen normally on 4 lo [::1]:123 Jan 30 14:03:41.265693 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:37%2#123 flags 0x11 failed: Cannot assign requested address Jan 30 14:03:41.265724 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:37%2#123 Jan 30 14:03:41.265743 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:37%2 Jan 30 14:03:41.265789 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Jan 30 14:03:41.514198 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 14:03:41.279396 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:03:41.514229 systemd-logind[1441]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 30 14:03:41.279443 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:03:41.514268 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 14:03:41.515526 systemd-logind[1441]: New seat seat0. Jan 30 14:03:41.529336 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:03:41.559377 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:03:41.564244 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 14:03:41.577311 jq[1458]: true Jan 30 14:03:41.608944 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 30 14:03:41.614471 tar[1457]: linux-amd64/helm Jan 30 14:03:41.641230 systemd[1]: Started update-engine.service - Update Engine. Jan 30 14:03:41.662528 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:03:41.675538 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:03:41.676690 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:03:41.676955 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:03:41.698337 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 14:03:41.708042 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:03:41.708324 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:03:41.730289 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:03:41.768210 bash[1492]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:03:41.778515 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:03:41.800430 systemd[1]: Starting sshkeys.service... Jan 30 14:03:41.865994 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:03:41.886346 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 30 14:03:42.030507 systemd-networkd[1371]: eth0: Gained IPv6LL Jan 30 14:03:42.044460 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 14:03:42.049664 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:03:42.053224 dbus-daemon[1421]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1487 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 14:03:42.067120 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 14:03:42.080170 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:03:42.103515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:03:42.109446 coreos-metadata[1495]: Jan 30 14:03:42.108 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 30 14:03:42.121124 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 30 14:03:42.123117 coreos-metadata[1495]: Jan 30 14:03:42.122 INFO Fetch failed with 404: resource not found Jan 30 14:03:42.123117 coreos-metadata[1495]: Jan 30 14:03:42.122 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 30 14:03:42.124251 coreos-metadata[1495]: Jan 30 14:03:42.123 INFO Fetch successful Jan 30 14:03:42.124251 coreos-metadata[1495]: Jan 30 14:03:42.124 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 30 14:03:42.124711 coreos-metadata[1495]: Jan 30 14:03:42.124 INFO Fetch failed with 404: resource not found Jan 30 14:03:42.124711 coreos-metadata[1495]: Jan 30 14:03:42.124 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 30 14:03:42.125300 coreos-metadata[1495]: Jan 30 14:03:42.125 INFO Fetch failed with 404: resource not found Jan 30 14:03:42.125300 coreos-metadata[1495]: Jan 30 14:03:42.125 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 30 14:03:42.128298 coreos-metadata[1495]: Jan 30 14:03:42.126 INFO Fetch successful Jan 30 14:03:42.132214 unknown[1495]: wrote ssh authorized keys file for user: core Jan 30 14:03:42.137106 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 30 14:03:42.152907 systemd[1]: Starting polkit.service - Authorization Manager... Jan 30 14:03:42.214564 init.sh[1505]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 30 14:03:42.219411 init.sh[1505]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 30 14:03:42.219411 init.sh[1505]: + /usr/bin/google_instance_setup Jan 30 14:03:42.252870 update-ssh-keys[1512]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:03:42.256746 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:03:42.283032 systemd[1]: Finished sshkeys.service. 
Jan 30 14:03:42.317580 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:03:42.330080 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:03:42.372604 polkitd[1508]: Started polkitd version 121 Jan 30 14:03:42.399137 polkitd[1508]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 14:03:42.399240 polkitd[1508]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 14:03:42.409398 polkitd[1508]: Finished loading, compiling and executing 2 rules Jan 30 14:03:42.411339 dbus-daemon[1421]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 14:03:42.411616 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 14:03:42.413276 polkitd[1508]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 14:03:42.480092 systemd-hostnamed[1487]: Hostname set to (transient) Jan 30 14:03:42.483963 systemd-resolved[1314]: System hostname changed to 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal'. Jan 30 14:03:42.583315 containerd[1470]: time="2025-01-30T14:03:42.582849215Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:03:42.756859 containerd[1470]: time="2025-01-30T14:03:42.756090233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:03:42.763504 containerd[1470]: time="2025-01-30T14:03:42.763435575Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:03:42.763504 containerd[1470]: time="2025-01-30T14:03:42.763500749Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jan 30 14:03:42.763713 containerd[1470]: time="2025-01-30T14:03:42.763527082Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:03:42.763766 containerd[1470]: time="2025-01-30T14:03:42.763742692Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:03:42.764129 containerd[1470]: time="2025-01-30T14:03:42.763772516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:03:42.764129 containerd[1470]: time="2025-01-30T14:03:42.763901110Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:03:42.764129 containerd[1470]: time="2025-01-30T14:03:42.763927911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:03:42.765591 containerd[1470]: time="2025-01-30T14:03:42.765363432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:03:42.765591 containerd[1470]: time="2025-01-30T14:03:42.765404990Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:03:42.765591 containerd[1470]: time="2025-01-30T14:03:42.765432507Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:03:42.765591 containerd[1470]: time="2025-01-30T14:03:42.765457469Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jan 30 14:03:42.765591 containerd[1470]: time="2025-01-30T14:03:42.765592366Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:03:42.769496 containerd[1470]: time="2025-01-30T14:03:42.769219033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:03:42.769496 containerd[1470]: time="2025-01-30T14:03:42.769476320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:03:42.769653 containerd[1470]: time="2025-01-30T14:03:42.769504436Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:03:42.769699 containerd[1470]: time="2025-01-30T14:03:42.769660661Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:03:42.769783 containerd[1470]: time="2025-01-30T14:03:42.769756821Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:03:42.784816 containerd[1470]: time="2025-01-30T14:03:42.781551809Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:03:42.784816 containerd[1470]: time="2025-01-30T14:03:42.781672338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:03:42.784816 containerd[1470]: time="2025-01-30T14:03:42.781757401Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:03:42.784816 containerd[1470]: time="2025-01-30T14:03:42.781784725Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 30 14:03:42.784816 containerd[1470]: time="2025-01-30T14:03:42.781810721Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:03:42.784816 containerd[1470]: time="2025-01-30T14:03:42.782081667Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:03:42.784816 containerd[1470]: time="2025-01-30T14:03:42.782747424Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:03:42.785259 containerd[1470]: time="2025-01-30T14:03:42.784971765Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:03:42.785259 containerd[1470]: time="2025-01-30T14:03:42.785028315Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:03:42.785259 containerd[1470]: time="2025-01-30T14:03:42.785055993Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:03:42.785259 containerd[1470]: time="2025-01-30T14:03:42.785102289Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:03:42.785259 containerd[1470]: time="2025-01-30T14:03:42.785127689Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:03:42.785259 containerd[1470]: time="2025-01-30T14:03:42.785151080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:03:42.785936 containerd[1470]: time="2025-01-30T14:03:42.785893243Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 30 14:03:42.786036 containerd[1470]: time="2025-01-30T14:03:42.785959018Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:03:42.786036 containerd[1470]: time="2025-01-30T14:03:42.785984389Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:03:42.786036 containerd[1470]: time="2025-01-30T14:03:42.786025903Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:03:42.786179 containerd[1470]: time="2025-01-30T14:03:42.786053645Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:03:42.786179 containerd[1470]: time="2025-01-30T14:03:42.786111473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786179 containerd[1470]: time="2025-01-30T14:03:42.786137085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786179 containerd[1470]: time="2025-01-30T14:03:42.786159419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786365 containerd[1470]: time="2025-01-30T14:03:42.786204685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786365 containerd[1470]: time="2025-01-30T14:03:42.786243983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786365 containerd[1470]: time="2025-01-30T14:03:42.786287782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786365 containerd[1470]: time="2025-01-30T14:03:42.786319084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 30 14:03:42.786546 containerd[1470]: time="2025-01-30T14:03:42.786366195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786546 containerd[1470]: time="2025-01-30T14:03:42.786407643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786546 containerd[1470]: time="2025-01-30T14:03:42.786459292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786546 containerd[1470]: time="2025-01-30T14:03:42.786481465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786546 containerd[1470]: time="2025-01-30T14:03:42.786502954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786772 containerd[1470]: time="2025-01-30T14:03:42.786543785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786772 containerd[1470]: time="2025-01-30T14:03:42.786571483Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:03:42.786772 containerd[1470]: time="2025-01-30T14:03:42.786631066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786772 containerd[1470]: time="2025-01-30T14:03:42.786653153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.786772 containerd[1470]: time="2025-01-30T14:03:42.786691460Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.786804445Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.786966703Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.786993176Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.787033826Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.787052056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.787074162Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.787227171Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:03:42.789201 containerd[1470]: time="2025-01-30T14:03:42.787253884Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:03:42.792374 containerd[1470]: time="2025-01-30T14:03:42.791511681Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:03:42.792374 containerd[1470]: time="2025-01-30T14:03:42.791658953Z" level=info msg="Connect containerd service" Jan 30 14:03:42.792374 containerd[1470]: time="2025-01-30T14:03:42.791745875Z" level=info msg="using legacy CRI server" Jan 30 14:03:42.792374 containerd[1470]: time="2025-01-30T14:03:42.791760013Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:03:42.792374 containerd[1470]: time="2025-01-30T14:03:42.792007997Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:03:42.794911 containerd[1470]: time="2025-01-30T14:03:42.793501825Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:03:42.796237 containerd[1470]: time="2025-01-30T14:03:42.796157693Z" level=info msg="Start subscribing containerd event" Jan 30 14:03:42.796319 containerd[1470]: time="2025-01-30T14:03:42.796247709Z" level=info msg="Start recovering state" Jan 30 14:03:42.796369 containerd[1470]: time="2025-01-30T14:03:42.796346261Z" level=info msg="Start event monitor"
Jan 30 14:03:42.796434 containerd[1470]: time="2025-01-30T14:03:42.796373470Z" level=info msg="Start snapshots syncer" Jan 30 14:03:42.796434 containerd[1470]: time="2025-01-30T14:03:42.796389299Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:03:42.796434 containerd[1470]: time="2025-01-30T14:03:42.796401953Z" level=info msg="Start streaming server" Jan 30 14:03:42.804976 containerd[1470]: time="2025-01-30T14:03:42.804927260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:03:42.805105 containerd[1470]: time="2025-01-30T14:03:42.805044180Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:03:42.816056 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:03:42.816451 containerd[1470]: time="2025-01-30T14:03:42.816372465Z" level=info msg="containerd successfully booted in 0.238022s" Jan 30 14:03:43.121495 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:03:43.189510 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:03:43.214410 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:03:43.230326 systemd[1]: Started sshd@0-10.128.0.55:22-139.178.68.195:36936.service - OpenSSH per-connection server daemon (139.178.68.195:36936). Jan 30 14:03:43.254339 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:03:43.254628 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:03:43.280112 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:03:43.330780 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:03:43.352474 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:03:43.369499 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 14:03:43.381367 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 14:03:43.407158 tar[1457]: linux-amd64/LICENSE Jan 30 14:03:43.407694 tar[1457]: linux-amd64/README.md Jan 30 14:03:43.434591 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:03:43.438035 instance-setup[1516]: INFO Running google_set_multiqueue. Jan 30 14:03:43.460257 instance-setup[1516]: INFO Set channels for eth0 to 2. Jan 30 14:03:43.465548 instance-setup[1516]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 30 14:03:43.467881 instance-setup[1516]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 30 14:03:43.467984 instance-setup[1516]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 30 14:03:43.470037 instance-setup[1516]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 30 14:03:43.470374 instance-setup[1516]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 30 14:03:43.473307 instance-setup[1516]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 30 14:03:43.473366 instance-setup[1516]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Jan 30 14:03:43.475677 instance-setup[1516]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 30 14:03:43.486417 instance-setup[1516]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 30 14:03:43.494520 instance-setup[1516]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 30 14:03:43.497047 instance-setup[1516]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 30 14:03:43.497099 instance-setup[1516]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 30 14:03:43.527039 init.sh[1505]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 30 14:03:43.626876 sshd[1547]: Accepted publickey for core from 139.178.68.195 port 36936 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:03:43.633870 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:43.654386 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:03:43.674739 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:03:43.695467 systemd-logind[1441]: New session 1 of user core. Jan 30 14:03:43.725003 startup-script[1587]: INFO Starting startup scripts. Jan 30 14:03:43.729879 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:03:43.735171 startup-script[1587]: INFO No startup scripts found in metadata. Jan 30 14:03:43.735254 startup-script[1587]: INFO Finished running startup scripts. Jan 30 14:03:43.751402 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:03:43.777587 init.sh[1505]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 30 14:03:43.777587 init.sh[1505]: + daemon_pids=() Jan 30 14:03:43.777587 init.sh[1505]: + for d in accounts clock_skew network Jan 30 14:03:43.777587 init.sh[1505]: + daemon_pids+=($!) 
Jan 30 14:03:43.777587 init.sh[1505]: + for d in accounts clock_skew network Jan 30 14:03:43.777941 init.sh[1505]: + daemon_pids+=($!) Jan 30 14:03:43.777941 init.sh[1505]: + for d in accounts clock_skew network Jan 30 14:03:43.778022 init.sh[1593]: + /usr/bin/google_accounts_daemon Jan 30 14:03:43.779082 init.sh[1505]: + daemon_pids+=($!) Jan 30 14:03:43.779082 init.sh[1505]: + NOTIFY_SOCKET=/run/systemd/notify Jan 30 14:03:43.779082 init.sh[1505]: + /usr/bin/systemd-notify --ready Jan 30 14:03:43.779715 init.sh[1594]: + /usr/bin/google_clock_skew_daemon Jan 30 14:03:43.780685 init.sh[1595]: + /usr/bin/google_network_daemon Jan 30 14:03:43.784998 (systemd)[1592]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:03:43.805675 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 30 14:03:43.819767 init.sh[1505]: + wait -n 1593 1594 1595 Jan 30 14:03:44.065259 systemd[1592]: Queued start job for default target default.target. Jan 30 14:03:44.072607 systemd[1592]: Created slice app.slice - User Application Slice. Jan 30 14:03:44.072668 systemd[1592]: Reached target paths.target - Paths. Jan 30 14:03:44.072694 systemd[1592]: Reached target timers.target - Timers. Jan 30 14:03:44.076007 systemd[1592]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:03:44.111303 systemd[1592]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:03:44.111562 systemd[1592]: Reached target sockets.target - Sockets. Jan 30 14:03:44.111593 systemd[1592]: Reached target basic.target - Basic System. Jan 30 14:03:44.111670 systemd[1592]: Reached target default.target - Main User Target. Jan 30 14:03:44.111725 systemd[1592]: Startup finished in 301ms. Jan 30 14:03:44.111910 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:03:44.130088 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:03:44.238545 google-clock-skew[1594]: INFO Starting Google Clock Skew daemon. 
Jan 30 14:03:44.248791 ntpd[1428]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:37%2]:123 Jan 30 14:03:44.250198 ntpd[1428]: 30 Jan 14:03:44 ntpd[1428]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:37%2]:123 Jan 30 14:03:44.254153 google-clock-skew[1594]: INFO Clock drift token has changed: 0. Jan 30 14:03:44.335298 google-networking[1595]: INFO Starting Google Networking daemon. Jan 30 14:03:44.400762 systemd[1]: Started sshd@1-10.128.0.55:22-139.178.68.195:36942.service - OpenSSH per-connection server daemon (139.178.68.195:36942). Jan 30 14:03:44.492678 groupadd[1616]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 30 14:03:44.500283 groupadd[1616]: group added to /etc/gshadow: name=google-sudoers Jan 30 14:03:44.561874 groupadd[1616]: new group: name=google-sudoers, GID=1000 Jan 30 14:03:44.592289 google-accounts[1593]: INFO Starting Google Accounts daemon. Jan 30 14:03:44.605653 google-accounts[1593]: WARNING OS Login not installed. Jan 30 14:03:44.607350 google-accounts[1593]: INFO Creating a new user account for 0. Jan 30 14:03:44.614079 init.sh[1625]: useradd: invalid user name '0': use --badname to ignore Jan 30 14:03:44.613543 google-accounts[1593]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 30 14:03:44.656063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:03:44.670089 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:03:44.675409 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:03:44.680725 systemd[1]: Startup finished in 1.055s (kernel) + 11.353s (initrd) + 9.627s (userspace) = 22.035s. 
Jan 30 14:03:44.728803 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 36942 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:03:44.730877 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:44.738192 systemd-logind[1441]: New session 2 of user core. Jan 30 14:03:44.747138 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:03:45.000623 systemd-resolved[1314]: Clock change detected. Flushing caches. Jan 30 14:03:45.002300 google-clock-skew[1594]: INFO Synced system time with hardware clock. Jan 30 14:03:45.035977 sshd[1615]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:45.041020 systemd[1]: sshd@1-10.128.0.55:22-139.178.68.195:36942.service: Deactivated successfully. Jan 30 14:03:45.044326 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:03:45.047125 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:03:45.049409 systemd-logind[1441]: Removed session 2. Jan 30 14:03:45.092923 systemd[1]: Started sshd@2-10.128.0.55:22-139.178.68.195:51304.service - OpenSSH per-connection server daemon (139.178.68.195:51304). Jan 30 14:03:45.385065 sshd[1646]: Accepted publickey for core from 139.178.68.195 port 51304 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:03:45.386619 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:45.394321 systemd-logind[1441]: New session 3 of user core. Jan 30 14:03:45.398708 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:03:45.590583 sshd[1646]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:45.598108 systemd[1]: sshd@2-10.128.0.55:22-139.178.68.195:51304.service: Deactivated successfully. Jan 30 14:03:45.600969 systemd[1]: session-3.scope: Deactivated successfully.
Jan 30 14:03:45.602094 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:03:45.603795 systemd-logind[1441]: Removed session 3. Jan 30 14:03:45.647262 systemd[1]: Started sshd@3-10.128.0.55:22-139.178.68.195:51308.service - OpenSSH per-connection server daemon (139.178.68.195:51308). Jan 30 14:03:45.730987 kubelet[1632]: E0130 14:03:45.730903 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:03:45.734112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:03:45.734385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:03:45.734851 systemd[1]: kubelet.service: Consumed 1.242s CPU time. Jan 30 14:03:45.926553 sshd[1653]: Accepted publickey for core from 139.178.68.195 port 51308 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:03:45.928771 sshd[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:45.935607 systemd-logind[1441]: New session 4 of user core. Jan 30 14:03:45.947698 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:03:46.140048 sshd[1653]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:46.144479 systemd[1]: sshd@3-10.128.0.55:22-139.178.68.195:51308.service: Deactivated successfully. Jan 30 14:03:46.146884 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:03:46.148738 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:03:46.150159 systemd-logind[1441]: Removed session 4. Jan 30 14:03:46.194797 systemd[1]: Started sshd@4-10.128.0.55:22-139.178.68.195:51324.service - OpenSSH per-connection server daemon (139.178.68.195:51324).
Jan 30 14:03:46.486873 sshd[1663]: Accepted publickey for core from 139.178.68.195 port 51324 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:03:46.488791 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:46.496229 systemd-logind[1441]: New session 5 of user core. Jan 30 14:03:46.501694 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:03:46.679417 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:03:46.679837 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:03:46.696324 sudo[1666]: pam_unix(sudo:session): session closed for user root Jan 30 14:03:46.738746 sshd[1663]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:46.744890 systemd[1]: sshd@4-10.128.0.55:22-139.178.68.195:51324.service: Deactivated successfully. Jan 30 14:03:46.747171 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:03:46.748145 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:03:46.749615 systemd-logind[1441]: Removed session 5. Jan 30 14:03:46.793834 systemd[1]: Started sshd@5-10.128.0.55:22-139.178.68.195:51336.service - OpenSSH per-connection server daemon (139.178.68.195:51336). Jan 30 14:03:47.067727 sshd[1671]: Accepted publickey for core from 139.178.68.195 port 51336 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:03:47.069297 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:47.075651 systemd-logind[1441]: New session 6 of user core. Jan 30 14:03:47.086696 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 14:03:47.244773 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:03:47.245287 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:03:47.250442 sudo[1675]: pam_unix(sudo:session): session closed for user root Jan 30 14:03:47.264775 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:03:47.265278 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:03:47.282838 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 14:03:47.296953 auditctl[1678]: No rules Jan 30 14:03:47.298721 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:03:47.299041 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:03:47.305017 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:03:47.346627 augenrules[1696]: No rules Jan 30 14:03:47.348953 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:03:47.351346 sudo[1674]: pam_unix(sudo:session): session closed for user root Jan 30 14:03:47.392692 sshd[1671]: pam_unix(sshd:session): session closed for user core Jan 30 14:03:47.397758 systemd[1]: sshd@5-10.128.0.55:22-139.178.68.195:51336.service: Deactivated successfully. Jan 30 14:03:47.400508 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:03:47.402758 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:03:47.404297 systemd-logind[1441]: Removed session 6. Jan 30 14:03:47.446826 systemd[1]: Started sshd@6-10.128.0.55:22-139.178.68.195:51350.service - OpenSSH per-connection server daemon (139.178.68.195:51350). 
Jan 30 14:03:47.729473 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 51350 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:03:47.731311 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:03:47.736840 systemd-logind[1441]: New session 7 of user core. Jan 30 14:03:47.748704 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:03:47.907717 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:03:47.908234 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:03:48.362800 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:03:48.375113 (dockerd)[1722]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:03:48.817916 dockerd[1722]: time="2025-01-30T14:03:48.817825935Z" level=info msg="Starting up" Jan 30 14:03:49.038396 dockerd[1722]: time="2025-01-30T14:03:49.038067779Z" level=info msg="Loading containers: start." Jan 30 14:03:49.190404 kernel: Initializing XFRM netlink socket Jan 30 14:03:49.308883 systemd-networkd[1371]: docker0: Link UP Jan 30 14:03:49.333876 dockerd[1722]: time="2025-01-30T14:03:49.333810023Z" level=info msg="Loading containers: done." 
Jan 30 14:03:49.357697 dockerd[1722]: time="2025-01-30T14:03:49.357625263Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:03:49.357966 dockerd[1722]: time="2025-01-30T14:03:49.357797718Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:03:49.358034 dockerd[1722]: time="2025-01-30T14:03:49.357977525Z" level=info msg="Daemon has completed initialization" Jan 30 14:03:49.406938 dockerd[1722]: time="2025-01-30T14:03:49.406723984Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:03:49.407309 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 14:03:50.358730 containerd[1470]: time="2025-01-30T14:03:50.358681859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 14:03:50.897935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3018087908.mount: Deactivated successfully. 
Jan 30 14:03:52.381882 containerd[1470]: time="2025-01-30T14:03:52.381803093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:52.383471 containerd[1470]: time="2025-01-30T14:03:52.383396434Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27983349" Jan 30 14:03:52.385281 containerd[1470]: time="2025-01-30T14:03:52.385231925Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:52.395330 containerd[1470]: time="2025-01-30T14:03:52.395114247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:52.397952 containerd[1470]: time="2025-01-30T14:03:52.397350029Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.038612117s" Jan 30 14:03:52.397952 containerd[1470]: time="2025-01-30T14:03:52.397483613Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 14:03:52.400890 containerd[1470]: time="2025-01-30T14:03:52.400842401Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 14:03:53.770272 containerd[1470]: time="2025-01-30T14:03:53.770203084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:53.771833 containerd[1470]: time="2025-01-30T14:03:53.771750034Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24703077" Jan 30 14:03:53.773549 containerd[1470]: time="2025-01-30T14:03:53.773474022Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:53.777821 containerd[1470]: time="2025-01-30T14:03:53.777735363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:53.779384 containerd[1470]: time="2025-01-30T14:03:53.779136078Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.378236837s" Jan 30 14:03:53.779384 containerd[1470]: time="2025-01-30T14:03:53.779193884Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 14:03:53.780503 containerd[1470]: time="2025-01-30T14:03:53.780294226Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 14:03:54.967946 containerd[1470]: time="2025-01-30T14:03:54.967866890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:54.969557 containerd[1470]: time="2025-01-30T14:03:54.969482196Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18653969" Jan 30 14:03:54.971348 containerd[1470]: time="2025-01-30T14:03:54.971275803Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:54.975435 containerd[1470]: time="2025-01-30T14:03:54.975351035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:54.977112 containerd[1470]: time="2025-01-30T14:03:54.976935416Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.196597415s" Jan 30 14:03:54.977112 containerd[1470]: time="2025-01-30T14:03:54.976986031Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 14:03:54.978220 containerd[1470]: time="2025-01-30T14:03:54.977954835Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 14:03:55.932205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:03:55.940627 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:03:56.174750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022990141.mount: Deactivated successfully. Jan 30 14:03:56.208814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:03:56.215803 (kubelet)[1937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:03:56.285427 kubelet[1937]: E0130 14:03:56.285305 1937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:03:56.294541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:03:56.295952 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:03:56.868943 containerd[1470]: time="2025-01-30T14:03:56.868862393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:56.870538 containerd[1470]: time="2025-01-30T14:03:56.870447000Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30233023" Jan 30 14:03:56.872282 containerd[1470]: time="2025-01-30T14:03:56.872207450Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:56.875304 containerd[1470]: time="2025-01-30T14:03:56.875233343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:56.876407 containerd[1470]: time="2025-01-30T14:03:56.876139963Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.898138103s" Jan 30 14:03:56.876407 containerd[1470]: time="2025-01-30T14:03:56.876193453Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 14:03:56.876885 containerd[1470]: time="2025-01-30T14:03:56.876824325Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:03:57.309146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955682956.mount: Deactivated successfully. Jan 30 14:03:58.396294 containerd[1470]: time="2025-01-30T14:03:58.396211453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:58.398060 containerd[1470]: time="2025-01-30T14:03:58.397985920Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 30 14:03:58.399403 containerd[1470]: time="2025-01-30T14:03:58.399294971Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:58.403272 containerd[1470]: time="2025-01-30T14:03:58.403200536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:58.404967 containerd[1470]: time="2025-01-30T14:03:58.404764394Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.527888476s" Jan 30 14:03:58.404967 containerd[1470]: time="2025-01-30T14:03:58.404823021Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 14:03:58.405983 containerd[1470]: time="2025-01-30T14:03:58.405720013Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 14:03:58.827820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2608404818.mount: Deactivated successfully. Jan 30 14:03:58.835939 containerd[1470]: time="2025-01-30T14:03:58.835857888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:58.837114 containerd[1470]: time="2025-01-30T14:03:58.837036552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Jan 30 14:03:58.838564 containerd[1470]: time="2025-01-30T14:03:58.838497371Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:58.844131 containerd[1470]: time="2025-01-30T14:03:58.843405333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:58.844818 containerd[1470]: time="2025-01-30T14:03:58.844592543Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 438.828017ms" Jan 30 
14:03:58.844818 containerd[1470]: time="2025-01-30T14:03:58.844642573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 14:03:58.845714 containerd[1470]: time="2025-01-30T14:03:58.845675534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 14:03:59.344450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052648627.mount: Deactivated successfully. Jan 30 14:04:01.491582 containerd[1470]: time="2025-01-30T14:04:01.491504351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:01.493344 containerd[1470]: time="2025-01-30T14:04:01.493258520Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556" Jan 30 14:04:01.495381 containerd[1470]: time="2025-01-30T14:04:01.495274615Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:01.500895 containerd[1470]: time="2025-01-30T14:04:01.500793804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:01.502948 containerd[1470]: time="2025-01-30T14:04:01.502726262Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.657007174s" Jan 30 14:04:01.502948 containerd[1470]: time="2025-01-30T14:04:01.502784291Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image 
reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 14:04:05.580992 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:04:05.587785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:04:05.635195 systemd[1]: Reloading requested from client PID 2077 ('systemctl') (unit session-7.scope)... Jan 30 14:04:05.635219 systemd[1]: Reloading... Jan 30 14:04:05.781520 zram_generator::config[2115]: No configuration found. Jan 30 14:04:05.958114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:04:06.060571 systemd[1]: Reloading finished in 424 ms. Jan 30 14:04:06.122135 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 14:04:06.122266 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 14:04:06.122752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:04:06.127779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:04:06.508493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:04:06.525162 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:04:06.587514 kubelet[2169]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:04:06.587514 kubelet[2169]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 14:04:06.587514 kubelet[2169]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:04:06.591971 kubelet[2169]: I0130 14:04:06.591868 2169 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:04:07.252258 kubelet[2169]: I0130 14:04:07.250650 2169 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 14:04:07.252258 kubelet[2169]: I0130 14:04:07.250695 2169 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:04:07.252258 kubelet[2169]: I0130 14:04:07.251323 2169 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 14:04:07.294716 kubelet[2169]: I0130 14:04:07.294106 2169 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:04:07.294716 kubelet[2169]: E0130 14:04:07.294640 2169 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:07.307351 kubelet[2169]: E0130 14:04:07.307298 2169 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:04:07.307351 kubelet[2169]: I0130 14:04:07.307348 2169 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jan 30 14:04:07.315498 kubelet[2169]: I0130 14:04:07.315434 2169 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:04:07.315675 kubelet[2169]: I0130 14:04:07.315626 2169 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 14:04:07.315916 kubelet[2169]: I0130 14:04:07.315838 2169 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:04:07.316167 kubelet[2169]: I0130 14:04:07.315894 2169 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryM
anagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:04:07.316167 kubelet[2169]: I0130 14:04:07.316161 2169 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:04:07.316463 kubelet[2169]: I0130 14:04:07.316179 2169 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 14:04:07.316463 kubelet[2169]: I0130 14:04:07.316339 2169 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:04:07.321009 kubelet[2169]: I0130 14:04:07.320577 2169 kubelet.go:408] "Attempting to sync node with API server" Jan 30 14:04:07.321009 kubelet[2169]: I0130 14:04:07.320644 2169 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:04:07.321009 kubelet[2169]: I0130 14:04:07.320710 2169 kubelet.go:314] "Adding apiserver pod source" Jan 30 14:04:07.321009 kubelet[2169]: I0130 14:04:07.320736 2169 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:04:07.332461 kubelet[2169]: W0130 14:04:07.331871 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:07.332461 kubelet[2169]: E0130 14:04:07.332358 2169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:07.332985 kubelet[2169]: I0130 14:04:07.332843 2169 
kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:04:07.336436 kubelet[2169]: I0130 14:04:07.336404 2169 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:04:07.336805 kubelet[2169]: W0130 14:04:07.336672 2169 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:04:07.338091 kubelet[2169]: I0130 14:04:07.337877 2169 server.go:1269] "Started kubelet" Jan 30 14:04:07.341758 kubelet[2169]: W0130 14:04:07.341582 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:07.341758 kubelet[2169]: E0130 14:04:07.341665 2169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:07.341967 kubelet[2169]: I0130 14:04:07.341782 2169 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:04:07.349830 kubelet[2169]: I0130 14:04:07.349149 2169 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:04:07.349830 kubelet[2169]: I0130 14:04:07.349733 2169 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:04:07.356134 kubelet[2169]: I0130 14:04:07.355921 2169 server.go:460] "Adding debug handlers to kubelet server" Jan 30 14:04:07.359401 kubelet[2169]: E0130 14:04:07.354171 2169 event.go:368] "Unable to write 
event (may retry after sleeping)" err="Post \"https://10.128.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal.181f7d6376debdea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,UID:ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,},FirstTimestamp:2025-01-30 14:04:07.337844202 +0000 UTC m=+0.806848913,LastTimestamp:2025-01-30 14:04:07.337844202 +0000 UTC m=+0.806848913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,}" Jan 30 14:04:07.359401 kubelet[2169]: I0130 14:04:07.358461 2169 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:04:07.361732 kubelet[2169]: I0130 14:04:07.361701 2169 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:04:07.362097 kubelet[2169]: E0130 14:04:07.362025 2169 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" Jan 30 14:04:07.366081 kubelet[2169]: I0130 14:04:07.366054 2169 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 14:04:07.366490 kubelet[2169]: I0130 14:04:07.366466 2169 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 14:04:07.366720 kubelet[2169]: I0130 14:04:07.366703 2169 reconciler.go:26] "Reconciler: start to sync state" Jan 30 
14:04:07.367725 kubelet[2169]: E0130 14:04:07.367679 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.55:6443: connect: connection refused" interval="200ms" Jan 30 14:04:07.369061 kubelet[2169]: W0130 14:04:07.368986 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:07.369172 kubelet[2169]: E0130 14:04:07.369078 2169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:07.369280 kubelet[2169]: I0130 14:04:07.369257 2169 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:04:07.369418 kubelet[2169]: I0130 14:04:07.369391 2169 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:04:07.371316 kubelet[2169]: E0130 14:04:07.371285 2169 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:04:07.372324 kubelet[2169]: I0130 14:04:07.371693 2169 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:04:07.401111 kubelet[2169]: I0130 14:04:07.401053 2169 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 30 14:04:07.404948 kubelet[2169]: I0130 14:04:07.404664 2169 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:04:07.404948 kubelet[2169]: I0130 14:04:07.404722 2169 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:04:07.404948 kubelet[2169]: I0130 14:04:07.404751 2169 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:04:07.404948 kubelet[2169]: E0130 14:04:07.404814 2169 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:04:07.407300 kubelet[2169]: I0130 14:04:07.406958 2169 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:04:07.407300 kubelet[2169]: I0130 14:04:07.406982 2169 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:04:07.407300 kubelet[2169]: I0130 14:04:07.407006 2169 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:04:07.409676 kubelet[2169]: W0130 14:04:07.409438 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:07.409676 kubelet[2169]: E0130 14:04:07.409542 2169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:07.462844 kubelet[2169]: E0130 14:04:07.462770 2169 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" Jan 30 14:04:07.505771 kubelet[2169]: E0130 14:04:07.505578 2169 
kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:04:07.536565 kubelet[2169]: I0130 14:04:07.536326 2169 policy_none.go:49] "None policy: Start" Jan 30 14:04:07.537820 kubelet[2169]: I0130 14:04:07.537714 2169 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:04:07.537820 kubelet[2169]: I0130 14:04:07.537757 2169 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:04:07.563041 kubelet[2169]: E0130 14:04:07.562968 2169 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" Jan 30 14:04:07.569608 kubelet[2169]: E0130 14:04:07.569549 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.55:6443: connect: connection refused" interval="400ms" Jan 30 14:04:07.663798 kubelet[2169]: E0130 14:04:07.663726 2169 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" Jan 30 14:04:07.694128 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:04:07.706555 kubelet[2169]: E0130 14:04:07.706484 2169 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:04:07.715098 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:04:07.720559 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 14:04:07.732802 kubelet[2169]: I0130 14:04:07.732763 2169 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:04:07.733087 kubelet[2169]: I0130 14:04:07.733062 2169 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:04:07.733227 kubelet[2169]: I0130 14:04:07.733095 2169 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:04:07.734487 kubelet[2169]: I0130 14:04:07.733670 2169 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:04:07.737468 kubelet[2169]: E0130 14:04:07.737442 2169 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" Jan 30 14:04:07.841278 kubelet[2169]: I0130 14:04:07.841118 2169 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:07.841775 kubelet[2169]: E0130 14:04:07.841689 2169 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.55:6443/api/v1/nodes\": dial tcp 10.128.0.55:6443: connect: connection refused" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:07.970484 kubelet[2169]: E0130 14:04:07.970404 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.55:6443: connect: connection refused" interval="800ms" Jan 30 14:04:08.049246 kubelet[2169]: I0130 14:04:08.049181 2169 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.049689 kubelet[2169]: E0130 14:04:08.049643 2169 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.55:6443/api/v1/nodes\": dial tcp 10.128.0.55:6443: connect: connection refused" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.129889 systemd[1]: Created slice kubepods-burstable-podb3b75592b65796de3a8437b5d57c88e8.slice - libcontainer container kubepods-burstable-podb3b75592b65796de3a8437b5d57c88e8.slice. Jan 30 14:04:08.145895 systemd[1]: Created slice kubepods-burstable-pod94b86b7991e030bef0da539d716249b4.slice - libcontainer container kubepods-burstable-pod94b86b7991e030bef0da539d716249b4.slice. Jan 30 14:04:08.152679 systemd[1]: Created slice kubepods-burstable-pode8a2b6237404a937d4ebaf11b1170d2f.slice - libcontainer container kubepods-burstable-pode8a2b6237404a937d4ebaf11b1170d2f.slice. Jan 30 14:04:08.171789 kubelet[2169]: I0130 14:04:08.171732 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8a2b6237404a937d4ebaf11b1170d2f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"e8a2b6237404a937d4ebaf11b1170d2f\") " pod="kube-system/kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.171789 kubelet[2169]: I0130 14:04:08.171798 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3b75592b65796de3a8437b5d57c88e8-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"b3b75592b65796de3a8437b5d57c88e8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.172050 kubelet[2169]: I0130 14:04:08.171829 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.172050 kubelet[2169]: I0130 14:04:08.171863 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.172050 kubelet[2169]: I0130 14:04:08.171891 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3b75592b65796de3a8437b5d57c88e8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"b3b75592b65796de3a8437b5d57c88e8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.172050 kubelet[2169]: I0130 14:04:08.171916 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3b75592b65796de3a8437b5d57c88e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"b3b75592b65796de3a8437b5d57c88e8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.172266 kubelet[2169]: I0130 14:04:08.171952 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.172266 kubelet[2169]: I0130 14:04:08.171981 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.172266 kubelet[2169]: I0130 14:04:08.172014 2169 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.217678 kubelet[2169]: W0130 14:04:08.217567 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:08.217678 kubelet[2169]: E0130 14:04:08.217675 2169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:08.440709 containerd[1470]: time="2025-01-30T14:04:08.440654234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,Uid:b3b75592b65796de3a8437b5d57c88e8,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:08.450863 containerd[1470]: time="2025-01-30T14:04:08.450680500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,Uid:94b86b7991e030bef0da539d716249b4,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:08.457440 containerd[1470]: time="2025-01-30T14:04:08.457017509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,Uid:e8a2b6237404a937d4ebaf11b1170d2f,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:08.462104 kubelet[2169]: I0130 14:04:08.462060 2169 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.463429 kubelet[2169]: E0130 14:04:08.463346 2169 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.55:6443/api/v1/nodes\": dial tcp 10.128.0.55:6443: connect: connection refused" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:08.733787 kubelet[2169]: W0130 14:04:08.733606 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:08.733787 kubelet[2169]: E0130 14:04:08.733680 2169 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:08.771849 kubelet[2169]: E0130 14:04:08.771782 2169 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.55:6443: connect: connection refused" interval="1.6s" Jan 30 14:04:08.804107 kubelet[2169]: W0130 14:04:08.803980 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:08.804107 kubelet[2169]: E0130 14:04:08.804070 2169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:08.872654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932030871.mount: Deactivated successfully. 
Jan 30 14:04:08.885934 containerd[1470]: time="2025-01-30T14:04:08.885792140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:04:08.887535 containerd[1470]: time="2025-01-30T14:04:08.887463964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 30 14:04:08.888925 containerd[1470]: time="2025-01-30T14:04:08.888861141Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:04:08.890657 containerd[1470]: time="2025-01-30T14:04:08.890595801Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:04:08.892227 containerd[1470]: time="2025-01-30T14:04:08.892156342Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:04:08.893749 containerd[1470]: time="2025-01-30T14:04:08.893682710Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:04:08.895061 containerd[1470]: time="2025-01-30T14:04:08.894930160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:04:08.899413 containerd[1470]: time="2025-01-30T14:04:08.899331243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:04:08.900807 
containerd[1470]: time="2025-01-30T14:04:08.900503871Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 443.377238ms" Jan 30 14:04:08.903882 containerd[1470]: time="2025-01-30T14:04:08.903581075Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 452.798268ms" Jan 30 14:04:08.904483 containerd[1470]: time="2025-01-30T14:04:08.904429096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 463.666343ms" Jan 30 14:04:08.977904 kubelet[2169]: W0130 14:04:08.977797 2169 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.55:6443: connect: connection refused Jan 30 14:04:08.977904 kubelet[2169]: E0130 14:04:08.977867 2169 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:09.109492 containerd[1470]: time="2025-01-30T14:04:09.108732179Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:09.109492 containerd[1470]: time="2025-01-30T14:04:09.108884365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:09.109492 containerd[1470]: time="2025-01-30T14:04:09.108911095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:09.111578 containerd[1470]: time="2025-01-30T14:04:09.110840760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:09.111806 containerd[1470]: time="2025-01-30T14:04:09.111700430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:09.112074 containerd[1470]: time="2025-01-30T14:04:09.112023550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:09.112262 containerd[1470]: time="2025-01-30T14:04:09.112074598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:09.112262 containerd[1470]: time="2025-01-30T14:04:09.112208735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:09.122413 containerd[1470]: time="2025-01-30T14:04:09.121732019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:09.122413 containerd[1470]: time="2025-01-30T14:04:09.121819087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:09.122413 containerd[1470]: time="2025-01-30T14:04:09.121841692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:09.122413 containerd[1470]: time="2025-01-30T14:04:09.121987298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:09.158556 systemd[1]: Started cri-containerd-bee7fbf422714e95baa178a2840bb71553fd39fa6c3650d671ae8aa848820c63.scope - libcontainer container bee7fbf422714e95baa178a2840bb71553fd39fa6c3650d671ae8aa848820c63. Jan 30 14:04:09.170621 systemd[1]: Started cri-containerd-9e052cc9e705fb6a2b070a78ea1044cd8d9a51f374b197d437e22be07f6fe78d.scope - libcontainer container 9e052cc9e705fb6a2b070a78ea1044cd8d9a51f374b197d437e22be07f6fe78d. Jan 30 14:04:09.173914 systemd[1]: Started cri-containerd-ec9c82a24fd2d0d353ea64519ae0d28b1743816639105a53032876c45d55c723.scope - libcontainer container ec9c82a24fd2d0d353ea64519ae0d28b1743816639105a53032876c45d55c723. 
Jan 30 14:04:09.280441 kubelet[2169]: I0130 14:04:09.280036 2169 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:09.281404 kubelet[2169]: E0130 14:04:09.281309 2169 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.55:6443/api/v1/nodes\": dial tcp 10.128.0.55:6443: connect: connection refused" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:09.290217 containerd[1470]: time="2025-01-30T14:04:09.290162579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,Uid:e8a2b6237404a937d4ebaf11b1170d2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"bee7fbf422714e95baa178a2840bb71553fd39fa6c3650d671ae8aa848820c63\"" Jan 30 14:04:09.295729 kubelet[2169]: E0130 14:04:09.295683 2169 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-21291" Jan 30 14:04:09.300693 containerd[1470]: time="2025-01-30T14:04:09.300530738Z" level=info msg="CreateContainer within sandbox \"bee7fbf422714e95baa178a2840bb71553fd39fa6c3650d671ae8aa848820c63\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:04:09.304850 containerd[1470]: time="2025-01-30T14:04:09.304608121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,Uid:94b86b7991e030bef0da539d716249b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e052cc9e705fb6a2b070a78ea1044cd8d9a51f374b197d437e22be07f6fe78d\"" Jan 30 14:04:09.307687 containerd[1470]: time="2025-01-30T14:04:09.307522512Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal,Uid:b3b75592b65796de3a8437b5d57c88e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec9c82a24fd2d0d353ea64519ae0d28b1743816639105a53032876c45d55c723\"" Jan 30 14:04:09.308789 kubelet[2169]: E0130 14:04:09.308742 2169 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flat" Jan 30 14:04:09.311295 kubelet[2169]: E0130 14:04:09.311071 2169 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-21291" Jan 30 14:04:09.311633 containerd[1470]: time="2025-01-30T14:04:09.311473058Z" level=info msg="CreateContainer within sandbox \"9e052cc9e705fb6a2b070a78ea1044cd8d9a51f374b197d437e22be07f6fe78d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:04:09.313518 containerd[1470]: time="2025-01-30T14:04:09.313244413Z" level=info msg="CreateContainer within sandbox \"ec9c82a24fd2d0d353ea64519ae0d28b1743816639105a53032876c45d55c723\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:04:09.318819 kubelet[2169]: E0130 14:04:09.318773 2169 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.55:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:04:09.341335 containerd[1470]: time="2025-01-30T14:04:09.341214051Z" level=info msg="CreateContainer within 
sandbox \"bee7fbf422714e95baa178a2840bb71553fd39fa6c3650d671ae8aa848820c63\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57427f8738d6b9d1c318a03e42b81797d380065f5339e891699d5c42d43bcb41\"" Jan 30 14:04:09.342753 containerd[1470]: time="2025-01-30T14:04:09.342550101Z" level=info msg="StartContainer for \"57427f8738d6b9d1c318a03e42b81797d380065f5339e891699d5c42d43bcb41\"" Jan 30 14:04:09.346489 containerd[1470]: time="2025-01-30T14:04:09.346258097Z" level=info msg="CreateContainer within sandbox \"ec9c82a24fd2d0d353ea64519ae0d28b1743816639105a53032876c45d55c723\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"294fb2d242600532f3928bb35c55c4b8f21e8e4e0ee295683f70b1c7d06b81fe\"" Jan 30 14:04:09.347549 containerd[1470]: time="2025-01-30T14:04:09.347039310Z" level=info msg="CreateContainer within sandbox \"9e052cc9e705fb6a2b070a78ea1044cd8d9a51f374b197d437e22be07f6fe78d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"59c30d6631391d1d527f3f8079c5b9893bccc9abad6fe5f9545a939a1df89eb6\"" Jan 30 14:04:09.349391 containerd[1470]: time="2025-01-30T14:04:09.348170943Z" level=info msg="StartContainer for \"294fb2d242600532f3928bb35c55c4b8f21e8e4e0ee295683f70b1c7d06b81fe\"" Jan 30 14:04:09.349715 containerd[1470]: time="2025-01-30T14:04:09.349673052Z" level=info msg="StartContainer for \"59c30d6631391d1d527f3f8079c5b9893bccc9abad6fe5f9545a939a1df89eb6\"" Jan 30 14:04:09.402733 systemd[1]: Started cri-containerd-57427f8738d6b9d1c318a03e42b81797d380065f5339e891699d5c42d43bcb41.scope - libcontainer container 57427f8738d6b9d1c318a03e42b81797d380065f5339e891699d5c42d43bcb41. Jan 30 14:04:09.420480 systemd[1]: Started cri-containerd-59c30d6631391d1d527f3f8079c5b9893bccc9abad6fe5f9545a939a1df89eb6.scope - libcontainer container 59c30d6631391d1d527f3f8079c5b9893bccc9abad6fe5f9545a939a1df89eb6. 
Jan 30 14:04:09.439523 systemd[1]: Started cri-containerd-294fb2d242600532f3928bb35c55c4b8f21e8e4e0ee295683f70b1c7d06b81fe.scope - libcontainer container 294fb2d242600532f3928bb35c55c4b8f21e8e4e0ee295683f70b1c7d06b81fe. Jan 30 14:04:09.529869 containerd[1470]: time="2025-01-30T14:04:09.529581872Z" level=info msg="StartContainer for \"59c30d6631391d1d527f3f8079c5b9893bccc9abad6fe5f9545a939a1df89eb6\" returns successfully" Jan 30 14:04:09.556669 containerd[1470]: time="2025-01-30T14:04:09.554341731Z" level=info msg="StartContainer for \"294fb2d242600532f3928bb35c55c4b8f21e8e4e0ee295683f70b1c7d06b81fe\" returns successfully" Jan 30 14:04:09.576471 containerd[1470]: time="2025-01-30T14:04:09.576414406Z" level=info msg="StartContainer for \"57427f8738d6b9d1c318a03e42b81797d380065f5339e891699d5c42d43bcb41\" returns successfully" Jan 30 14:04:10.888667 kubelet[2169]: I0130 14:04:10.888625 2169 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:12.538762 kubelet[2169]: E0130 14:04:12.538711 2169 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:12.597406 kubelet[2169]: I0130 14:04:12.596805 2169 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:12.597406 kubelet[2169]: E0130 14:04:12.596952 2169 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\": node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" Jan 30 14:04:12.618743 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 30 14:04:13.344738 kubelet[2169]: I0130 14:04:13.344676 2169 apiserver.go:52] "Watching apiserver" Jan 30 14:04:13.367863 kubelet[2169]: I0130 14:04:13.367792 2169 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:04:14.504305 systemd[1]: Reloading requested from client PID 2442 ('systemctl') (unit session-7.scope)... Jan 30 14:04:14.504857 systemd[1]: Reloading... Jan 30 14:04:14.675423 zram_generator::config[2485]: No configuration found. Jan 30 14:04:14.819635 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:04:14.954331 systemd[1]: Reloading finished in 448 ms. Jan 30 14:04:15.004792 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:04:15.016085 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:04:15.016482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:04:15.016577 systemd[1]: kubelet.service: Consumed 1.363s CPU time, 119.5M memory peak, 0B memory swap peak. Jan 30 14:04:15.022992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:04:15.251081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:04:15.263234 (kubelet)[2530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:04:15.331852 kubelet[2530]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:04:15.331852 kubelet[2530]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 30 14:04:15.331852 kubelet[2530]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:04:15.332484 kubelet[2530]: I0130 14:04:15.331954 2530 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:04:15.342448 kubelet[2530]: I0130 14:04:15.341795 2530 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 14:04:15.342448 kubelet[2530]: I0130 14:04:15.341834 2530 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:04:15.344153 kubelet[2530]: I0130 14:04:15.342973 2530 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 14:04:15.345462 kubelet[2530]: I0130 14:04:15.345418 2530 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 14:04:15.348461 kubelet[2530]: I0130 14:04:15.348426 2530 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:04:15.352622 kubelet[2530]: E0130 14:04:15.352527 2530 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:04:15.352622 kubelet[2530]: I0130 14:04:15.352568 2530 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:04:15.357022 kubelet[2530]: I0130 14:04:15.356982 2530 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 14:04:15.357304 kubelet[2530]: I0130 14:04:15.357180 2530 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 14:04:15.357951 kubelet[2530]: I0130 14:04:15.357401 2530 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:04:15.357951 kubelet[2530]: I0130 14:04:15.357454 2530 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topo
logyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:04:15.357951 kubelet[2530]: I0130 14:04:15.357865 2530 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:04:15.357951 kubelet[2530]: I0130 14:04:15.357880 2530 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 14:04:15.358478 kubelet[2530]: I0130 14:04:15.357928 2530 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:04:15.358478 kubelet[2530]: I0130 14:04:15.358096 2530 kubelet.go:408] "Attempting to sync node with API server" Jan 30 14:04:15.358478 kubelet[2530]: I0130 14:04:15.358119 2530 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:04:15.358478 kubelet[2530]: I0130 14:04:15.358164 2530 kubelet.go:314] "Adding apiserver pod source" Jan 30 14:04:15.358478 kubelet[2530]: I0130 14:04:15.358189 2530 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:04:15.363443 kubelet[2530]: I0130 14:04:15.362184 2530 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 14:04:15.363443 kubelet[2530]: I0130 14:04:15.362895 2530 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:04:15.363810 kubelet[2530]: I0130 14:04:15.363792 2530 server.go:1269] "Started kubelet" Jan 30 14:04:15.372397 kubelet[2530]: I0130 14:04:15.371117 2530 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:04:15.372826 kubelet[2530]: I0130 14:04:15.372766 2530 server.go:460] "Adding debug handlers to kubelet server" Jan 30 14:04:15.379434 kubelet[2530]: I0130 14:04:15.376982 2530 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:04:15.380448 kubelet[2530]: I0130 14:04:15.380351 2530 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:04:15.382676 
kubelet[2530]: I0130 14:04:15.382648 2530 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:04:15.386270 kubelet[2530]: I0130 14:04:15.386221 2530 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:04:15.392388 kubelet[2530]: I0130 14:04:15.389687 2530 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 14:04:15.392388 kubelet[2530]: E0130 14:04:15.390130 2530 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" not found" Jan 30 14:04:15.395399 kubelet[2530]: I0130 14:04:15.393630 2530 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 14:04:15.395399 kubelet[2530]: I0130 14:04:15.393867 2530 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:04:15.399394 kubelet[2530]: I0130 14:04:15.396934 2530 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:04:15.399394 kubelet[2530]: I0130 14:04:15.398870 2530 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 14:04:15.399394 kubelet[2530]: I0130 14:04:15.398918 2530 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 14:04:15.399394 kubelet[2530]: I0130 14:04:15.398943 2530 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 14:04:15.399394 kubelet[2530]: E0130 14:04:15.399008 2530 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:04:15.412001 kubelet[2530]: I0130 14:04:15.411966 2530 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:04:15.412182 kubelet[2530]: I0130 14:04:15.412100 2530 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:04:15.418164 kubelet[2530]: E0130 14:04:15.417979 2530 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:04:15.422035 kubelet[2530]: I0130 14:04:15.418962 2530 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:04:15.483429 kubelet[2530]: I0130 14:04:15.483307 2530 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 14:04:15.483429 kubelet[2530]: I0130 14:04:15.483340 2530 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 14:04:15.483650 kubelet[2530]: I0130 14:04:15.483446 2530 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:04:15.484614 kubelet[2530]: I0130 14:04:15.483761 2530 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:04:15.484614 kubelet[2530]: I0130 14:04:15.483786 2530 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:04:15.484614 kubelet[2530]: I0130 14:04:15.483813 2530 policy_none.go:49] "None policy: Start" Jan 30 14:04:15.484855 kubelet[2530]: I0130 14:04:15.484643 2530 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 14:04:15.484855 kubelet[2530]: I0130 14:04:15.484680 2530 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:04:15.485351 kubelet[2530]: I0130 14:04:15.484964 2530 state_mem.go:75] "Updated machine memory state" Jan 30 14:04:15.493528 kubelet[2530]: I0130 14:04:15.492836 2530 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:04:15.493528 kubelet[2530]: I0130 14:04:15.493238 2530 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:04:15.493528 kubelet[2530]: I0130 14:04:15.493260 2530 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:04:15.494170 kubelet[2530]: I0130 14:04:15.494135 2530 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:04:15.524852 kubelet[2530]: W0130 14:04:15.524725 2530 
warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 14:04:15.526635 kubelet[2530]: W0130 14:04:15.525206 2530 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 14:04:15.526635 kubelet[2530]: W0130 14:04:15.526102 2530 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 30 14:04:15.614813 kubelet[2530]: I0130 14:04:15.614775 2530 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.626113 kubelet[2530]: I0130 14:04:15.625723 2530 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.626113 kubelet[2530]: I0130 14:04:15.625852 2530 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695135 kubelet[2530]: I0130 14:04:15.694696 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695135 kubelet[2530]: I0130 14:04:15.694762 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695135 kubelet[2530]: I0130 14:04:15.694800 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695135 kubelet[2530]: I0130 14:04:15.694832 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e8a2b6237404a937d4ebaf11b1170d2f-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"e8a2b6237404a937d4ebaf11b1170d2f\") " pod="kube-system/kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695552 kubelet[2530]: I0130 14:04:15.694866 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3b75592b65796de3a8437b5d57c88e8-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"b3b75592b65796de3a8437b5d57c88e8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695552 kubelet[2530]: I0130 14:04:15.694899 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b3b75592b65796de3a8437b5d57c88e8-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"b3b75592b65796de3a8437b5d57c88e8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695552 kubelet[2530]: I0130 14:04:15.694931 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3b75592b65796de3a8437b5d57c88e8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"b3b75592b65796de3a8437b5d57c88e8\") " pod="kube-system/kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695552 kubelet[2530]: I0130 14:04:15.694961 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:15.695797 kubelet[2530]: I0130 14:04:15.694993 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/94b86b7991e030bef0da539d716249b4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" (UID: \"94b86b7991e030bef0da539d716249b4\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:16.361914 kubelet[2530]: I0130 14:04:16.361837 2530 apiserver.go:52] "Watching apiserver" Jan 30 14:04:16.394910 kubelet[2530]: I0130 
14:04:16.394709 2530 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 14:04:16.500296 kubelet[2530]: I0130 14:04:16.499874 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" podStartSLOduration=1.499827002 podStartE2EDuration="1.499827002s" podCreationTimestamp="2025-01-30 14:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:16.49959665 +0000 UTC m=+1.229477349" watchObservedRunningTime="2025-01-30 14:04:16.499827002 +0000 UTC m=+1.229707675" Jan 30 14:04:16.512345 kubelet[2530]: I0130 14:04:16.511674 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" podStartSLOduration=1.5116489450000001 podStartE2EDuration="1.511648945s" podCreationTimestamp="2025-01-30 14:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:16.510044251 +0000 UTC m=+1.239924949" watchObservedRunningTime="2025-01-30 14:04:16.511648945 +0000 UTC m=+1.241529621" Jan 30 14:04:16.541972 kubelet[2530]: I0130 14:04:16.540428 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" podStartSLOduration=1.540399507 podStartE2EDuration="1.540399507s" podCreationTimestamp="2025-01-30 14:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:16.525079056 +0000 UTC m=+1.254959756" watchObservedRunningTime="2025-01-30 14:04:16.540399507 +0000 UTC m=+1.270280215" Jan 30 14:04:20.579442 kubelet[2530]: I0130 
14:04:20.579375 2530 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:04:20.581451 containerd[1470]: time="2025-01-30T14:04:20.581137804Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:04:20.584246 kubelet[2530]: I0130 14:04:20.581611 2530 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:04:21.394492 sudo[1707]: pam_unix(sudo:session): session closed for user root Jan 30 14:04:21.429658 systemd[1]: Created slice kubepods-besteffort-podff8d2728_c691_4f4e_ad0e_c20c6e1c2e5c.slice - libcontainer container kubepods-besteffort-podff8d2728_c691_4f4e_ad0e_c20c6e1c2e5c.slice. Jan 30 14:04:21.437002 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:21.445778 systemd[1]: sshd@6-10.128.0.55:22-139.178.68.195:51350.service: Deactivated successfully. Jan 30 14:04:21.448902 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 14:04:21.449184 systemd[1]: session-7.scope: Consumed 6.916s CPU time, 155.6M memory peak, 0B memory swap peak. Jan 30 14:04:21.451297 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jan 30 14:04:21.453183 systemd-logind[1441]: Removed session 7. 
Jan 30 14:04:21.530979 kubelet[2530]: I0130 14:04:21.530923 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c-kube-proxy\") pod \"kube-proxy-crjfw\" (UID: \"ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c\") " pod="kube-system/kube-proxy-crjfw" Jan 30 14:04:21.530979 kubelet[2530]: I0130 14:04:21.530988 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c-xtables-lock\") pod \"kube-proxy-crjfw\" (UID: \"ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c\") " pod="kube-system/kube-proxy-crjfw" Jan 30 14:04:21.531233 kubelet[2530]: I0130 14:04:21.531023 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c-lib-modules\") pod \"kube-proxy-crjfw\" (UID: \"ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c\") " pod="kube-system/kube-proxy-crjfw" Jan 30 14:04:21.531233 kubelet[2530]: I0130 14:04:21.531052 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgx6b\" (UniqueName: \"kubernetes.io/projected/ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c-kube-api-access-qgx6b\") pod \"kube-proxy-crjfw\" (UID: \"ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c\") " pod="kube-system/kube-proxy-crjfw" Jan 30 14:04:21.721661 systemd[1]: Created slice kubepods-besteffort-pod3c31d80c_5a3a_45de_b16d_02094ccaa088.slice - libcontainer container kubepods-besteffort-pod3c31d80c_5a3a_45de_b16d_02094ccaa088.slice. 
Jan 30 14:04:21.743410 containerd[1470]: time="2025-01-30T14:04:21.743315764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crjfw,Uid:ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:21.778239 containerd[1470]: time="2025-01-30T14:04:21.778130233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:21.779168 containerd[1470]: time="2025-01-30T14:04:21.779061771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:21.779168 containerd[1470]: time="2025-01-30T14:04:21.779095589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:21.779462 containerd[1470]: time="2025-01-30T14:04:21.779307569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:21.816859 systemd[1]: Started cri-containerd-1def243adc2638fd6598ce86cd29c63f815c531d21f0fb279cfc68486a1fd6c5.scope - libcontainer container 1def243adc2638fd6598ce86cd29c63f815c531d21f0fb279cfc68486a1fd6c5. 
Jan 30 14:04:21.833106 kubelet[2530]: I0130 14:04:21.832953 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c31d80c-5a3a-45de-b16d-02094ccaa088-var-lib-calico\") pod \"tigera-operator-76c4976dd7-w9b5b\" (UID: \"3c31d80c-5a3a-45de-b16d-02094ccaa088\") " pod="tigera-operator/tigera-operator-76c4976dd7-w9b5b" Jan 30 14:04:21.833106 kubelet[2530]: I0130 14:04:21.833014 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg2w4\" (UniqueName: \"kubernetes.io/projected/3c31d80c-5a3a-45de-b16d-02094ccaa088-kube-api-access-lg2w4\") pod \"tigera-operator-76c4976dd7-w9b5b\" (UID: \"3c31d80c-5a3a-45de-b16d-02094ccaa088\") " pod="tigera-operator/tigera-operator-76c4976dd7-w9b5b" Jan 30 14:04:21.852839 containerd[1470]: time="2025-01-30T14:04:21.852769728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-crjfw,Uid:ff8d2728-c691-4f4e-ad0e-c20c6e1c2e5c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1def243adc2638fd6598ce86cd29c63f815c531d21f0fb279cfc68486a1fd6c5\"" Jan 30 14:04:21.857251 containerd[1470]: time="2025-01-30T14:04:21.857166019Z" level=info msg="CreateContainer within sandbox \"1def243adc2638fd6598ce86cd29c63f815c531d21f0fb279cfc68486a1fd6c5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:04:21.880784 containerd[1470]: time="2025-01-30T14:04:21.880684127Z" level=info msg="CreateContainer within sandbox \"1def243adc2638fd6598ce86cd29c63f815c531d21f0fb279cfc68486a1fd6c5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3fe5be155568b4320257b2117ebd6fcd63b3fb91463943f9798f484ef051817e\"" Jan 30 14:04:21.881962 containerd[1470]: time="2025-01-30T14:04:21.881515698Z" level=info msg="StartContainer for \"3fe5be155568b4320257b2117ebd6fcd63b3fb91463943f9798f484ef051817e\"" Jan 30 14:04:21.920656 systemd[1]: Started 
cri-containerd-3fe5be155568b4320257b2117ebd6fcd63b3fb91463943f9798f484ef051817e.scope - libcontainer container 3fe5be155568b4320257b2117ebd6fcd63b3fb91463943f9798f484ef051817e. Jan 30 14:04:21.968963 containerd[1470]: time="2025-01-30T14:04:21.968844649Z" level=info msg="StartContainer for \"3fe5be155568b4320257b2117ebd6fcd63b3fb91463943f9798f484ef051817e\" returns successfully" Jan 30 14:04:22.028249 containerd[1470]: time="2025-01-30T14:04:22.027319791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-w9b5b,Uid:3c31d80c-5a3a-45de-b16d-02094ccaa088,Namespace:tigera-operator,Attempt:0,}" Jan 30 14:04:22.083198 containerd[1470]: time="2025-01-30T14:04:22.083016584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:22.083431 containerd[1470]: time="2025-01-30T14:04:22.083234442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:22.083431 containerd[1470]: time="2025-01-30T14:04:22.083315254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:22.084348 containerd[1470]: time="2025-01-30T14:04:22.083641060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:22.119311 systemd[1]: Started cri-containerd-0be95b81630aec3f4acb272c5a9114ca537cb2fea4f7cb4a632e3c5d9ec6fb8f.scope - libcontainer container 0be95b81630aec3f4acb272c5a9114ca537cb2fea4f7cb4a632e3c5d9ec6fb8f. 
Jan 30 14:04:22.195061 containerd[1470]: time="2025-01-30T14:04:22.194923251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-w9b5b,Uid:3c31d80c-5a3a-45de-b16d-02094ccaa088,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0be95b81630aec3f4acb272c5a9114ca537cb2fea4f7cb4a632e3c5d9ec6fb8f\"" Jan 30 14:04:22.197760 containerd[1470]: time="2025-01-30T14:04:22.197681424Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 14:04:23.291060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284331913.mount: Deactivated successfully. Jan 30 14:04:24.143215 containerd[1470]: time="2025-01-30T14:04:24.143153031Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:24.144674 containerd[1470]: time="2025-01-30T14:04:24.144594362Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 14:04:24.146318 containerd[1470]: time="2025-01-30T14:04:24.146241326Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:24.149614 containerd[1470]: time="2025-01-30T14:04:24.149521875Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:24.150640 containerd[1470]: time="2025-01-30T14:04:24.150595089Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.952857474s" Jan 30 14:04:24.150748 
containerd[1470]: time="2025-01-30T14:04:24.150645102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 14:04:24.153908 containerd[1470]: time="2025-01-30T14:04:24.153691192Z" level=info msg="CreateContainer within sandbox \"0be95b81630aec3f4acb272c5a9114ca537cb2fea4f7cb4a632e3c5d9ec6fb8f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 14:04:24.174213 containerd[1470]: time="2025-01-30T14:04:24.174140116Z" level=info msg="CreateContainer within sandbox \"0be95b81630aec3f4acb272c5a9114ca537cb2fea4f7cb4a632e3c5d9ec6fb8f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5a612b9631e69f7d534d8e83257e50255e5b29ecf810acfaad0d30dfb6fd6197\"" Jan 30 14:04:24.175200 containerd[1470]: time="2025-01-30T14:04:24.175139819Z" level=info msg="StartContainer for \"5a612b9631e69f7d534d8e83257e50255e5b29ecf810acfaad0d30dfb6fd6197\"" Jan 30 14:04:24.221404 systemd[1]: run-containerd-runc-k8s.io-5a612b9631e69f7d534d8e83257e50255e5b29ecf810acfaad0d30dfb6fd6197-runc.e1J7s5.mount: Deactivated successfully. Jan 30 14:04:24.230434 systemd[1]: Started cri-containerd-5a612b9631e69f7d534d8e83257e50255e5b29ecf810acfaad0d30dfb6fd6197.scope - libcontainer container 5a612b9631e69f7d534d8e83257e50255e5b29ecf810acfaad0d30dfb6fd6197. 
Jan 30 14:04:24.270132 containerd[1470]: time="2025-01-30T14:04:24.270069548Z" level=info msg="StartContainer for \"5a612b9631e69f7d534d8e83257e50255e5b29ecf810acfaad0d30dfb6fd6197\" returns successfully" Jan 30 14:04:24.498189 kubelet[2530]: I0130 14:04:24.497791 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-crjfw" podStartSLOduration=3.497763061 podStartE2EDuration="3.497763061s" podCreationTimestamp="2025-01-30 14:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:22.493788411 +0000 UTC m=+7.223669113" watchObservedRunningTime="2025-01-30 14:04:24.497763061 +0000 UTC m=+9.227643761" Jan 30 14:04:24.498189 kubelet[2530]: I0130 14:04:24.497962 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-w9b5b" podStartSLOduration=1.543130165 podStartE2EDuration="3.497950352s" podCreationTimestamp="2025-01-30 14:04:21 +0000 UTC" firstStartedPulling="2025-01-30 14:04:22.197094227 +0000 UTC m=+6.926974914" lastFinishedPulling="2025-01-30 14:04:24.151914422 +0000 UTC m=+8.881795101" observedRunningTime="2025-01-30 14:04:24.497597918 +0000 UTC m=+9.227478616" watchObservedRunningTime="2025-01-30 14:04:24.497950352 +0000 UTC m=+9.227831054" Jan 30 14:04:26.745421 update_engine[1450]: I20250130 14:04:26.743407 1450 update_attempter.cc:509] Updating boot flags... 
Jan 30 14:04:26.852437 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2909) Jan 30 14:04:27.060433 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2910) Jan 30 14:04:27.228434 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2910) Jan 30 14:04:27.820468 kubelet[2530]: W0130 14:04:27.820425 2530 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal' and this object Jan 30 14:04:27.821061 kubelet[2530]: E0130 14:04:27.820499 2530 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal' and this object" logger="UnhandledError" Jan 30 14:04:27.825906 systemd[1]: Created slice kubepods-besteffort-pod8a631f1e_0145_4d64_989a_9efdd67668f8.slice - libcontainer container kubepods-besteffort-pod8a631f1e_0145_4d64_989a_9efdd67668f8.slice. Jan 30 14:04:27.927442 systemd[1]: Created slice kubepods-besteffort-podc86c3f35_4a1a_42cc_8ba0_c0da14ea4aa0.slice - libcontainer container kubepods-besteffort-podc86c3f35_4a1a_42cc_8ba0_c0da14ea4aa0.slice. 
Jan 30 14:04:27.981124 kubelet[2530]: I0130 14:04:27.981060 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mv2t\" (UniqueName: \"kubernetes.io/projected/8a631f1e-0145-4d64-989a-9efdd67668f8-kube-api-access-2mv2t\") pod \"calico-typha-86c8cc579c-fmt2c\" (UID: \"8a631f1e-0145-4d64-989a-9efdd67668f8\") " pod="calico-system/calico-typha-86c8cc579c-fmt2c" Jan 30 14:04:27.981323 kubelet[2530]: I0130 14:04:27.981138 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a631f1e-0145-4d64-989a-9efdd67668f8-tigera-ca-bundle\") pod \"calico-typha-86c8cc579c-fmt2c\" (UID: \"8a631f1e-0145-4d64-989a-9efdd67668f8\") " pod="calico-system/calico-typha-86c8cc579c-fmt2c" Jan 30 14:04:27.981323 kubelet[2530]: I0130 14:04:27.981167 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8a631f1e-0145-4d64-989a-9efdd67668f8-typha-certs\") pod \"calico-typha-86c8cc579c-fmt2c\" (UID: \"8a631f1e-0145-4d64-989a-9efdd67668f8\") " pod="calico-system/calico-typha-86c8cc579c-fmt2c" Jan 30 14:04:28.066887 kubelet[2530]: E0130 14:04:28.066823 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827" Jan 30 14:04:28.083509 kubelet[2530]: I0130 14:04:28.082325 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-tigera-ca-bundle\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " 
pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.083509 kubelet[2530]: I0130 14:04:28.082416 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-cni-bin-dir\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.083509 kubelet[2530]: I0130 14:04:28.082545 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5a144ec1-46fe-4595-a551-f8f4cec9f827-varrun\") pod \"csi-node-driver-92kn2\" (UID: \"5a144ec1-46fe-4595-a551-f8f4cec9f827\") " pod="calico-system/csi-node-driver-92kn2" Jan 30 14:04:28.083509 kubelet[2530]: I0130 14:04:28.082606 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5a144ec1-46fe-4595-a551-f8f4cec9f827-socket-dir\") pod \"csi-node-driver-92kn2\" (UID: \"5a144ec1-46fe-4595-a551-f8f4cec9f827\") " pod="calico-system/csi-node-driver-92kn2" Jan 30 14:04:28.083509 kubelet[2530]: I0130 14:04:28.082641 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xskp4\" (UniqueName: \"kubernetes.io/projected/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-kube-api-access-xskp4\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.083912 kubelet[2530]: I0130 14:04:28.082676 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-xtables-lock\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.083912 
kubelet[2530]: I0130 14:04:28.082704 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-cni-net-dir\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.083912 kubelet[2530]: I0130 14:04:28.082734 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a144ec1-46fe-4595-a551-f8f4cec9f827-kubelet-dir\") pod \"csi-node-driver-92kn2\" (UID: \"5a144ec1-46fe-4595-a551-f8f4cec9f827\") " pod="calico-system/csi-node-driver-92kn2" Jan 30 14:04:28.083912 kubelet[2530]: I0130 14:04:28.082758 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-policysync\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.083912 kubelet[2530]: I0130 14:04:28.082805 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-node-certs\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.084180 kubelet[2530]: I0130 14:04:28.082831 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-var-lib-calico\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.084180 kubelet[2530]: I0130 14:04:28.082874 2530 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-lib-modules\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.084180 kubelet[2530]: I0130 14:04:28.082905 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-cni-log-dir\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.084180 kubelet[2530]: I0130 14:04:28.082930 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-flexvol-driver-host\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.084180 kubelet[2530]: I0130 14:04:28.082956 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5a144ec1-46fe-4595-a551-f8f4cec9f827-registration-dir\") pod \"csi-node-driver-92kn2\" (UID: \"5a144ec1-46fe-4595-a551-f8f4cec9f827\") " pod="calico-system/csi-node-driver-92kn2" Jan 30 14:04:28.084478 kubelet[2530]: I0130 14:04:28.083002 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0-var-run-calico\") pod \"calico-node-8r42z\" (UID: \"c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0\") " pod="calico-system/calico-node-8r42z" Jan 30 14:04:28.084478 kubelet[2530]: I0130 14:04:28.083030 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-rzjgw\" (UniqueName: \"kubernetes.io/projected/5a144ec1-46fe-4595-a551-f8f4cec9f827-kube-api-access-rzjgw\") pod \"csi-node-driver-92kn2\" (UID: \"5a144ec1-46fe-4595-a551-f8f4cec9f827\") " pod="calico-system/csi-node-driver-92kn2" Jan 30 14:04:28.209342 kubelet[2530]: E0130 14:04:28.208993 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.209342 kubelet[2530]: W0130 14:04:28.209029 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.209342 kubelet[2530]: E0130 14:04:28.209061 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:28.210438 kubelet[2530]: E0130 14:04:28.210001 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.210438 kubelet[2530]: W0130 14:04:28.210025 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.210438 kubelet[2530]: E0130 14:04:28.210046 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:28.212089 kubelet[2530]: E0130 14:04:28.211141 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.212089 kubelet[2530]: W0130 14:04:28.211434 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.212089 kubelet[2530]: E0130 14:04:28.211454 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:28.215589 kubelet[2530]: E0130 14:04:28.213769 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.215589 kubelet[2530]: W0130 14:04:28.213792 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.215589 kubelet[2530]: E0130 14:04:28.213813 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:28.233689 kubelet[2530]: E0130 14:04:28.233656 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.233884 kubelet[2530]: W0130 14:04:28.233864 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.234008 kubelet[2530]: E0130 14:04:28.233991 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:28.241586 containerd[1470]: time="2025-01-30T14:04:28.238594082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8r42z,Uid:c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0,Namespace:calico-system,Attempt:0,}" Jan 30 14:04:28.267430 kubelet[2530]: E0130 14:04:28.262664 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.267430 kubelet[2530]: W0130 14:04:28.262697 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.267430 kubelet[2530]: E0130 14:04:28.262729 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:28.286711 kubelet[2530]: E0130 14:04:28.286670 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.286711 kubelet[2530]: W0130 14:04:28.286708 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.286954 kubelet[2530]: E0130 14:04:28.286749 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:28.300728 containerd[1470]: time="2025-01-30T14:04:28.298962769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:28.300728 containerd[1470]: time="2025-01-30T14:04:28.299045976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:28.300728 containerd[1470]: time="2025-01-30T14:04:28.299070739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:28.300728 containerd[1470]: time="2025-01-30T14:04:28.299195587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:28.334676 systemd[1]: Started cri-containerd-5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1.scope - libcontainer container 5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1. 
Jan 30 14:04:28.385071 containerd[1470]: time="2025-01-30T14:04:28.384932680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8r42z,Uid:c86c3f35-4a1a-42cc-8ba0-c0da14ea4aa0,Namespace:calico-system,Attempt:0,} returns sandbox id \"5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1\"" Jan 30 14:04:28.389332 containerd[1470]: time="2025-01-30T14:04:28.388884167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:04:28.389527 kubelet[2530]: E0130 14:04:28.389408 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.389527 kubelet[2530]: W0130 14:04:28.389437 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.389527 kubelet[2530]: E0130 14:04:28.389464 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:28.491722 kubelet[2530]: E0130 14:04:28.491536 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.491722 kubelet[2530]: W0130 14:04:28.491591 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.491722 kubelet[2530]: E0130 14:04:28.491624 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:28.592409 kubelet[2530]: E0130 14:04:28.592244 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.592409 kubelet[2530]: W0130 14:04:28.592274 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.592409 kubelet[2530]: E0130 14:04:28.592302 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:28.693515 kubelet[2530]: E0130 14:04:28.693471 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.693515 kubelet[2530]: W0130 14:04:28.693507 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.693728 kubelet[2530]: E0130 14:04:28.693536 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:28.794886 kubelet[2530]: E0130 14:04:28.794833 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.794886 kubelet[2530]: W0130 14:04:28.794865 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.794886 kubelet[2530]: E0130 14:04:28.794895 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:28.896605 kubelet[2530]: E0130 14:04:28.896466 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.896605 kubelet[2530]: W0130 14:04:28.896500 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.896605 kubelet[2530]: E0130 14:04:28.896529 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:28.998223 kubelet[2530]: E0130 14:04:28.998123 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:28.998467 kubelet[2530]: W0130 14:04:28.998244 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:28.998467 kubelet[2530]: E0130 14:04:28.998290 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:29.050769 kubelet[2530]: E0130 14:04:29.050721 2530 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:29.050769 kubelet[2530]: W0130 14:04:29.050750 2530 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:29.050975 kubelet[2530]: E0130 14:04:29.050782 2530 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:29.333964 containerd[1470]: time="2025-01-30T14:04:29.333914090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c8cc579c-fmt2c,Uid:8a631f1e-0145-4d64-989a-9efdd67668f8,Namespace:calico-system,Attempt:0,}" Jan 30 14:04:29.346725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1565280570.mount: Deactivated successfully. Jan 30 14:04:29.394664 containerd[1470]: time="2025-01-30T14:04:29.393242198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:29.394664 containerd[1470]: time="2025-01-30T14:04:29.393406803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:29.394664 containerd[1470]: time="2025-01-30T14:04:29.393449481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:29.394664 containerd[1470]: time="2025-01-30T14:04:29.393747971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:29.402858 kubelet[2530]: E0130 14:04:29.401084 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827" Jan 30 14:04:29.446046 systemd[1]: Started cri-containerd-0508444822dbef016723a6ac334d1f65f3cbd44c95e0c6d2e94ba7a74b056603.scope - libcontainer container 0508444822dbef016723a6ac334d1f65f3cbd44c95e0c6d2e94ba7a74b056603. 
Jan 30 14:04:29.547904 containerd[1470]: time="2025-01-30T14:04:29.547852442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86c8cc579c-fmt2c,Uid:8a631f1e-0145-4d64-989a-9efdd67668f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"0508444822dbef016723a6ac334d1f65f3cbd44c95e0c6d2e94ba7a74b056603\"" Jan 30 14:04:29.622434 containerd[1470]: time="2025-01-30T14:04:29.622238703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:29.624240 containerd[1470]: time="2025-01-30T14:04:29.624138925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 14:04:29.626017 containerd[1470]: time="2025-01-30T14:04:29.625954414Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:29.630707 containerd[1470]: time="2025-01-30T14:04:29.630628155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:29.631842 containerd[1470]: time="2025-01-30T14:04:29.631646447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.242707907s" Jan 30 14:04:29.631842 containerd[1470]: time="2025-01-30T14:04:29.631698562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference 
\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 14:04:29.634387 containerd[1470]: time="2025-01-30T14:04:29.634223794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 14:04:29.636102 containerd[1470]: time="2025-01-30T14:04:29.636045384Z" level=info msg="CreateContainer within sandbox \"5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:04:29.655569 containerd[1470]: time="2025-01-30T14:04:29.655422422Z" level=info msg="CreateContainer within sandbox \"5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100\"" Jan 30 14:04:29.658395 containerd[1470]: time="2025-01-30T14:04:29.656346199Z" level=info msg="StartContainer for \"b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100\"" Jan 30 14:04:29.712589 systemd[1]: Started cri-containerd-b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100.scope - libcontainer container b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100. Jan 30 14:04:29.763647 containerd[1470]: time="2025-01-30T14:04:29.763571950Z" level=info msg="StartContainer for \"b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100\" returns successfully" Jan 30 14:04:29.784321 systemd[1]: cri-containerd-b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100.scope: Deactivated successfully. 
Jan 30 14:04:30.420675 containerd[1470]: time="2025-01-30T14:04:30.420583514Z" level=info msg="shim disconnected" id=b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100 namespace=k8s.io Jan 30 14:04:30.421437 containerd[1470]: time="2025-01-30T14:04:30.420683587Z" level=warning msg="cleaning up after shim disconnected" id=b6040db20feeafa4c2381d018eae1a1c34160b41edf1baba8d0019e740e45100 namespace=k8s.io Jan 30 14:04:30.421437 containerd[1470]: time="2025-01-30T14:04:30.420722287Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:04:31.400458 kubelet[2530]: E0130 14:04:31.399903 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827" Jan 30 14:04:32.420658 containerd[1470]: time="2025-01-30T14:04:32.420592174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:32.422092 containerd[1470]: time="2025-01-30T14:04:32.422011269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 14:04:32.423612 containerd[1470]: time="2025-01-30T14:04:32.423542610Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:32.427522 containerd[1470]: time="2025-01-30T14:04:32.427454633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:32.428454 containerd[1470]: time="2025-01-30T14:04:32.428405165Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.794123325s" Jan 30 14:04:32.428889 containerd[1470]: time="2025-01-30T14:04:32.428468467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 14:04:32.430230 containerd[1470]: time="2025-01-30T14:04:32.430195246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:04:32.443864 containerd[1470]: time="2025-01-30T14:04:32.443661172Z" level=info msg="CreateContainer within sandbox \"0508444822dbef016723a6ac334d1f65f3cbd44c95e0c6d2e94ba7a74b056603\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:04:32.470981 containerd[1470]: time="2025-01-30T14:04:32.470871987Z" level=info msg="CreateContainer within sandbox \"0508444822dbef016723a6ac334d1f65f3cbd44c95e0c6d2e94ba7a74b056603\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"23b02c168ce08e19fddbe14b4493a87975a3719020ac8d21726e18b32f8b55da\"" Jan 30 14:04:32.471829 containerd[1470]: time="2025-01-30T14:04:32.471736982Z" level=info msg="StartContainer for \"23b02c168ce08e19fddbe14b4493a87975a3719020ac8d21726e18b32f8b55da\"" Jan 30 14:04:32.524675 systemd[1]: Started cri-containerd-23b02c168ce08e19fddbe14b4493a87975a3719020ac8d21726e18b32f8b55da.scope - libcontainer container 23b02c168ce08e19fddbe14b4493a87975a3719020ac8d21726e18b32f8b55da. 
Jan 30 14:04:32.583695 containerd[1470]: time="2025-01-30T14:04:32.583221138Z" level=info msg="StartContainer for \"23b02c168ce08e19fddbe14b4493a87975a3719020ac8d21726e18b32f8b55da\" returns successfully" Jan 30 14:04:33.404553 kubelet[2530]: E0130 14:04:33.402805 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827" Jan 30 14:04:33.567016 kubelet[2530]: I0130 14:04:33.566894 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86c8cc579c-fmt2c" podStartSLOduration=3.689404432 podStartE2EDuration="6.566866615s" podCreationTimestamp="2025-01-30 14:04:27 +0000 UTC" firstStartedPulling="2025-01-30 14:04:29.552545762 +0000 UTC m=+14.282426449" lastFinishedPulling="2025-01-30 14:04:32.430007936 +0000 UTC m=+17.159888632" observedRunningTime="2025-01-30 14:04:33.564316122 +0000 UTC m=+18.294196824" watchObservedRunningTime="2025-01-30 14:04:33.566866615 +0000 UTC m=+18.296747311" Jan 30 14:04:34.542435 kubelet[2530]: I0130 14:04:34.542395 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:04:35.402202 kubelet[2530]: E0130 14:04:35.401706 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827" Jan 30 14:04:36.337504 containerd[1470]: time="2025-01-30T14:04:36.337432087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:36.339018 containerd[1470]: 
time="2025-01-30T14:04:36.338949609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 14:04:36.340995 containerd[1470]: time="2025-01-30T14:04:36.340905616Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:36.346109 containerd[1470]: time="2025-01-30T14:04:36.346015954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:36.347304 containerd[1470]: time="2025-01-30T14:04:36.347108862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.916866449s" Jan 30 14:04:36.347304 containerd[1470]: time="2025-01-30T14:04:36.347165064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 14:04:36.359402 containerd[1470]: time="2025-01-30T14:04:36.357498749Z" level=info msg="CreateContainer within sandbox \"5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:04:36.388915 containerd[1470]: time="2025-01-30T14:04:36.388864471Z" level=info msg="CreateContainer within sandbox \"5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560\"" Jan 30 14:04:36.390822 
containerd[1470]: time="2025-01-30T14:04:36.390764207Z" level=info msg="StartContainer for \"955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560\"" Jan 30 14:04:36.447662 systemd[1]: Started cri-containerd-955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560.scope - libcontainer container 955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560. Jan 30 14:04:36.512998 containerd[1470]: time="2025-01-30T14:04:36.512944185Z" level=info msg="StartContainer for \"955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560\" returns successfully" Jan 30 14:04:37.400128 kubelet[2530]: E0130 14:04:37.399673 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827" Jan 30 14:04:37.563101 systemd[1]: cri-containerd-955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560.scope: Deactivated successfully. Jan 30 14:04:37.596056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560-rootfs.mount: Deactivated successfully. Jan 30 14:04:37.608020 kubelet[2530]: I0130 14:04:37.606801 2530 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 14:04:37.663508 systemd[1]: Created slice kubepods-burstable-pod7ec8df9f_3ea0_4291_8547_137f7df6ece5.slice - libcontainer container kubepods-burstable-pod7ec8df9f_3ea0_4291_8547_137f7df6ece5.slice. Jan 30 14:04:37.683334 systemd[1]: Created slice kubepods-burstable-podf66963dd_f3cb_428f_babc_4d8723f64706.slice - libcontainer container kubepods-burstable-podf66963dd_f3cb_428f_babc_4d8723f64706.slice. 
Jan 30 14:04:37.696452 systemd[1]: Created slice kubepods-besteffort-poded528f1e_84ec_4c23_bd0c_158afa9a4b29.slice - libcontainer container kubepods-besteffort-poded528f1e_84ec_4c23_bd0c_158afa9a4b29.slice. Jan 30 14:04:37.708670 systemd[1]: Created slice kubepods-besteffort-pod7023258d_44a7_4f54_855a_e497a3b14836.slice - libcontainer container kubepods-besteffort-pod7023258d_44a7_4f54_855a_e497a3b14836.slice. Jan 30 14:04:37.722927 systemd[1]: Created slice kubepods-besteffort-pod768db4e0_04b4_4bca_96da_7fc689135d38.slice - libcontainer container kubepods-besteffort-pod768db4e0_04b4_4bca_96da_7fc689135d38.slice. Jan 30 14:04:37.757992 kubelet[2530]: I0130 14:04:37.757489 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c78gz\" (UniqueName: \"kubernetes.io/projected/f66963dd-f3cb-428f-babc-4d8723f64706-kube-api-access-c78gz\") pod \"coredns-6f6b679f8f-vncj6\" (UID: \"f66963dd-f3cb-428f-babc-4d8723f64706\") " pod="kube-system/coredns-6f6b679f8f-vncj6" Jan 30 14:04:37.757992 kubelet[2530]: I0130 14:04:37.757561 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-988r2\" (UniqueName: \"kubernetes.io/projected/7023258d-44a7-4f54-855a-e497a3b14836-kube-api-access-988r2\") pod \"calico-apiserver-7dc6458667-6bqgk\" (UID: \"7023258d-44a7-4f54-855a-e497a3b14836\") " pod="calico-apiserver/calico-apiserver-7dc6458667-6bqgk" Jan 30 14:04:37.757992 kubelet[2530]: I0130 14:04:37.757594 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed528f1e-84ec-4c23-bd0c-158afa9a4b29-tigera-ca-bundle\") pod \"calico-kube-controllers-764f8fb56f-czn89\" (UID: \"ed528f1e-84ec-4c23-bd0c-158afa9a4b29\") " pod="calico-system/calico-kube-controllers-764f8fb56f-czn89" Jan 30 14:04:37.757992 kubelet[2530]: I0130 14:04:37.757622 2530 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfphr\" (UniqueName: \"kubernetes.io/projected/7ec8df9f-3ea0-4291-8547-137f7df6ece5-kube-api-access-wfphr\") pod \"coredns-6f6b679f8f-gp6sd\" (UID: \"7ec8df9f-3ea0-4291-8547-137f7df6ece5\") " pod="kube-system/coredns-6f6b679f8f-gp6sd" Jan 30 14:04:37.757992 kubelet[2530]: I0130 14:04:37.757658 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ec8df9f-3ea0-4291-8547-137f7df6ece5-config-volume\") pod \"coredns-6f6b679f8f-gp6sd\" (UID: \"7ec8df9f-3ea0-4291-8547-137f7df6ece5\") " pod="kube-system/coredns-6f6b679f8f-gp6sd" Jan 30 14:04:37.790348 kubelet[2530]: I0130 14:04:37.757689 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsm2p\" (UniqueName: \"kubernetes.io/projected/ed528f1e-84ec-4c23-bd0c-158afa9a4b29-kube-api-access-vsm2p\") pod \"calico-kube-controllers-764f8fb56f-czn89\" (UID: \"ed528f1e-84ec-4c23-bd0c-158afa9a4b29\") " pod="calico-system/calico-kube-controllers-764f8fb56f-czn89" Jan 30 14:04:37.790348 kubelet[2530]: I0130 14:04:37.757722 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f66963dd-f3cb-428f-babc-4d8723f64706-config-volume\") pod \"coredns-6f6b679f8f-vncj6\" (UID: \"f66963dd-f3cb-428f-babc-4d8723f64706\") " pod="kube-system/coredns-6f6b679f8f-vncj6" Jan 30 14:04:37.790348 kubelet[2530]: I0130 14:04:37.757753 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/768db4e0-04b4-4bca-96da-7fc689135d38-calico-apiserver-certs\") pod \"calico-apiserver-7dc6458667-hv869\" (UID: \"768db4e0-04b4-4bca-96da-7fc689135d38\") " pod="calico-apiserver/calico-apiserver-7dc6458667-hv869" 
Jan 30 14:04:37.790348 kubelet[2530]: I0130 14:04:37.757783 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9qx5\" (UniqueName: \"kubernetes.io/projected/768db4e0-04b4-4bca-96da-7fc689135d38-kube-api-access-c9qx5\") pod \"calico-apiserver-7dc6458667-hv869\" (UID: \"768db4e0-04b4-4bca-96da-7fc689135d38\") " pod="calico-apiserver/calico-apiserver-7dc6458667-hv869" Jan 30 14:04:37.790348 kubelet[2530]: I0130 14:04:37.757812 2530 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7023258d-44a7-4f54-855a-e497a3b14836-calico-apiserver-certs\") pod \"calico-apiserver-7dc6458667-6bqgk\" (UID: \"7023258d-44a7-4f54-855a-e497a3b14836\") " pod="calico-apiserver/calico-apiserver-7dc6458667-6bqgk" Jan 30 14:04:37.973622 containerd[1470]: time="2025-01-30T14:04:37.973561789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gp6sd,Uid:7ec8df9f-3ea0-4291-8547-137f7df6ece5,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:37.990807 containerd[1470]: time="2025-01-30T14:04:37.990752879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vncj6,Uid:f66963dd-f3cb-428f-babc-4d8723f64706,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:38.007129 containerd[1470]: time="2025-01-30T14:04:38.007056090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f8fb56f-czn89,Uid:ed528f1e-84ec-4c23-bd0c-158afa9a4b29,Namespace:calico-system,Attempt:0,}" Jan 30 14:04:38.018353 containerd[1470]: time="2025-01-30T14:04:38.018237072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc6458667-6bqgk,Uid:7023258d-44a7-4f54-855a-e497a3b14836,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:04:38.029729 containerd[1470]: time="2025-01-30T14:04:38.029626153Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7dc6458667-hv869,Uid:768db4e0-04b4-4bca-96da-7fc689135d38,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:04:38.643143 containerd[1470]: time="2025-01-30T14:04:38.643062739Z" level=info msg="shim disconnected" id=955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560 namespace=k8s.io Jan 30 14:04:38.643714 containerd[1470]: time="2025-01-30T14:04:38.643271594Z" level=warning msg="cleaning up after shim disconnected" id=955b0f1a2de6e0a155183b1eda4b5daa5de4a89a3d0424ed1c82d391c6bdc560 namespace=k8s.io Jan 30 14:04:38.643714 containerd[1470]: time="2025-01-30T14:04:38.643297514Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:04:38.925506 containerd[1470]: time="2025-01-30T14:04:38.924620233Z" level=error msg="Failed to destroy network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.928049 containerd[1470]: time="2025-01-30T14:04:38.927794389Z" level=error msg="encountered an error cleaning up failed sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.928253 containerd[1470]: time="2025-01-30T14:04:38.927847636Z" level=error msg="Failed to destroy network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.928918 containerd[1470]: time="2025-01-30T14:04:38.928857859Z" 
level=error msg="encountered an error cleaning up failed sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.929033 containerd[1470]: time="2025-01-30T14:04:38.928969353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vncj6,Uid:f66963dd-f3cb-428f-babc-4d8723f64706,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.929327 kubelet[2530]: E0130 14:04:38.929271 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.931908 kubelet[2530]: E0130 14:04:38.929407 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vncj6" Jan 30 14:04:38.931908 kubelet[2530]: E0130 14:04:38.929440 2530 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-vncj6" Jan 30 14:04:38.931908 kubelet[2530]: E0130 14:04:38.929508 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-vncj6_kube-system(f66963dd-f3cb-428f-babc-4d8723f64706)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-vncj6_kube-system(f66963dd-f3cb-428f-babc-4d8723f64706)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vncj6" podUID="f66963dd-f3cb-428f-babc-4d8723f64706" Jan 30 14:04:38.932323 containerd[1470]: time="2025-01-30T14:04:38.929935992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gp6sd,Uid:7ec8df9f-3ea0-4291-8547-137f7df6ece5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.932435 kubelet[2530]: E0130 14:04:38.930498 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.932435 kubelet[2530]: E0130 14:04:38.930562 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gp6sd" Jan 30 14:04:38.932435 kubelet[2530]: E0130 14:04:38.930601 2530 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gp6sd" Jan 30 14:04:38.932610 kubelet[2530]: E0130 14:04:38.930650 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gp6sd_kube-system(7ec8df9f-3ea0-4291-8547-137f7df6ece5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gp6sd_kube-system(7ec8df9f-3ea0-4291-8547-137f7df6ece5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gp6sd" podUID="7ec8df9f-3ea0-4291-8547-137f7df6ece5" Jan 30 14:04:38.973119 containerd[1470]: time="2025-01-30T14:04:38.972452619Z" 
level=error msg="Failed to destroy network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.973119 containerd[1470]: time="2025-01-30T14:04:38.972907040Z" level=error msg="encountered an error cleaning up failed sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.973119 containerd[1470]: time="2025-01-30T14:04:38.972980188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc6458667-6bqgk,Uid:7023258d-44a7-4f54-855a-e497a3b14836,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.974216 kubelet[2530]: E0130 14:04:38.973679 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.974216 kubelet[2530]: E0130 14:04:38.973765 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc6458667-6bqgk" Jan 30 14:04:38.974216 kubelet[2530]: E0130 14:04:38.973797 2530 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc6458667-6bqgk" Jan 30 14:04:38.974903 kubelet[2530]: E0130 14:04:38.973859 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc6458667-6bqgk_calico-apiserver(7023258d-44a7-4f54-855a-e497a3b14836)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc6458667-6bqgk_calico-apiserver(7023258d-44a7-4f54-855a-e497a3b14836)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc6458667-6bqgk" podUID="7023258d-44a7-4f54-855a-e497a3b14836" Jan 30 14:04:38.982893 containerd[1470]: time="2025-01-30T14:04:38.982808937Z" level=error msg="Failed to destroy network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 30 14:04:38.983870 containerd[1470]: time="2025-01-30T14:04:38.983414005Z" level=error msg="encountered an error cleaning up failed sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.983870 containerd[1470]: time="2025-01-30T14:04:38.983526877Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc6458667-hv869,Uid:768db4e0-04b4-4bca-96da-7fc689135d38,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.984183 kubelet[2530]: E0130 14:04:38.983841 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.984183 kubelet[2530]: E0130 14:04:38.983942 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc6458667-hv869" Jan 30 
14:04:38.984183 kubelet[2530]: E0130 14:04:38.983975 2530 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7dc6458667-hv869" Jan 30 14:04:38.985540 kubelet[2530]: E0130 14:04:38.984055 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7dc6458667-hv869_calico-apiserver(768db4e0-04b4-4bca-96da-7fc689135d38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7dc6458667-hv869_calico-apiserver(768db4e0-04b4-4bca-96da-7fc689135d38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc6458667-hv869" podUID="768db4e0-04b4-4bca-96da-7fc689135d38" Jan 30 14:04:38.989665 containerd[1470]: time="2025-01-30T14:04:38.989616045Z" level=error msg="Failed to destroy network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.990063 containerd[1470]: time="2025-01-30T14:04:38.990015863Z" level=error msg="encountered an error cleaning up failed sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.990188 containerd[1470]: time="2025-01-30T14:04:38.990099235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f8fb56f-czn89,Uid:ed528f1e-84ec-4c23-bd0c-158afa9a4b29,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.990452 kubelet[2530]: E0130 14:04:38.990405 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:38.990557 kubelet[2530]: E0130 14:04:38.990482 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764f8fb56f-czn89" Jan 30 14:04:38.990557 kubelet[2530]: E0130 14:04:38.990540 2530 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-764f8fb56f-czn89" Jan 30 14:04:38.990781 kubelet[2530]: E0130 14:04:38.990619 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-764f8fb56f-czn89_calico-system(ed528f1e-84ec-4c23-bd0c-158afa9a4b29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-764f8fb56f-czn89_calico-system(ed528f1e-84ec-4c23-bd0c-158afa9a4b29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764f8fb56f-czn89" podUID="ed528f1e-84ec-4c23-bd0c-158afa9a4b29" Jan 30 14:04:39.367169 kubelet[2530]: I0130 14:04:39.366856 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:04:39.417028 systemd[1]: Created slice kubepods-besteffort-pod5a144ec1_46fe_4595_a551_f8f4cec9f827.slice - libcontainer container kubepods-besteffort-pod5a144ec1_46fe_4595_a551_f8f4cec9f827.slice. 
Jan 30 14:04:39.420735 containerd[1470]: time="2025-01-30T14:04:39.420683152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92kn2,Uid:5a144ec1-46fe-4595-a551-f8f4cec9f827,Namespace:calico-system,Attempt:0,}" Jan 30 14:04:39.499309 containerd[1470]: time="2025-01-30T14:04:39.499230461Z" level=error msg="Failed to destroy network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:39.499815 containerd[1470]: time="2025-01-30T14:04:39.499732549Z" level=error msg="encountered an error cleaning up failed sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:39.499947 containerd[1470]: time="2025-01-30T14:04:39.499859092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92kn2,Uid:5a144ec1-46fe-4595-a551-f8f4cec9f827,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:39.500272 kubelet[2530]: E0130 14:04:39.500214 2530 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:39.500395 kubelet[2530]: E0130 14:04:39.500296 2530 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-92kn2" Jan 30 14:04:39.500395 kubelet[2530]: E0130 14:04:39.500337 2530 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-92kn2" Jan 30 14:04:39.500633 kubelet[2530]: E0130 14:04:39.500458 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-92kn2_calico-system(5a144ec1-46fe-4595-a551-f8f4cec9f827)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-92kn2_calico-system(5a144ec1-46fe-4595-a551-f8f4cec9f827)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827" Jan 30 14:04:39.560406 kubelet[2530]: I0130 14:04:39.560047 2530 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:04:39.561729 containerd[1470]: time="2025-01-30T14:04:39.561491194Z" level=info msg="StopPodSandbox for \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\"" Jan 30 14:04:39.562379 containerd[1470]: time="2025-01-30T14:04:39.562175538Z" level=info msg="Ensure that sandbox ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1 in task-service has been cleanup successfully" Jan 30 14:04:39.573441 containerd[1470]: time="2025-01-30T14:04:39.572828732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:04:39.573618 kubelet[2530]: I0130 14:04:39.573329 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:04:39.585211 containerd[1470]: time="2025-01-30T14:04:39.585161006Z" level=info msg="StopPodSandbox for \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\"" Jan 30 14:04:39.587174 containerd[1470]: time="2025-01-30T14:04:39.586962281Z" level=info msg="Ensure that sandbox cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d in task-service has been cleanup successfully" Jan 30 14:04:39.597712 kubelet[2530]: I0130 14:04:39.597497 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:04:39.601638 containerd[1470]: time="2025-01-30T14:04:39.601569617Z" level=info msg="StopPodSandbox for \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\"" Jan 30 14:04:39.602611 containerd[1470]: time="2025-01-30T14:04:39.601866979Z" level=info msg="Ensure that sandbox d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf in task-service has been cleanup successfully" Jan 30 14:04:39.613317 kubelet[2530]: I0130 14:04:39.611575 2530 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:04:39.613701 containerd[1470]: time="2025-01-30T14:04:39.612612245Z" level=info msg="StopPodSandbox for \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\"" Jan 30 14:04:39.613701 containerd[1470]: time="2025-01-30T14:04:39.612840649Z" level=info msg="Ensure that sandbox 976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a in task-service has been cleanup successfully" Jan 30 14:04:39.617707 kubelet[2530]: I0130 14:04:39.617585 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:04:39.619686 containerd[1470]: time="2025-01-30T14:04:39.619554139Z" level=info msg="StopPodSandbox for \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\"" Jan 30 14:04:39.620108 containerd[1470]: time="2025-01-30T14:04:39.619975325Z" level=info msg="Ensure that sandbox 9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238 in task-service has been cleanup successfully" Jan 30 14:04:39.624843 kubelet[2530]: I0130 14:04:39.624810 2530 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:04:39.629008 containerd[1470]: time="2025-01-30T14:04:39.628950831Z" level=info msg="StopPodSandbox for \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\"" Jan 30 14:04:39.630376 containerd[1470]: time="2025-01-30T14:04:39.630094218Z" level=info msg="Ensure that sandbox 2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901 in task-service has been cleanup successfully" Jan 30 14:04:39.680389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1-shm.mount: Deactivated successfully. 
Jan 30 14:04:39.681656 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238-shm.mount: Deactivated successfully.
Jan 30 14:04:39.681776 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901-shm.mount: Deactivated successfully.
Jan 30 14:04:39.717927 containerd[1470]: time="2025-01-30T14:04:39.717847685Z" level=error msg="StopPodSandbox for \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\" failed" error="failed to destroy network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:04:39.718579 kubelet[2530]: E0130 14:04:39.718291 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1"
Jan 30 14:04:39.718579 kubelet[2530]: E0130 14:04:39.718389 2530 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1"}
Jan 30 14:04:39.718579 kubelet[2530]: E0130 14:04:39.718486 2530 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed528f1e-84ec-4c23-bd0c-158afa9a4b29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 14:04:39.718579 kubelet[2530]: E0130 14:04:39.718527 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed528f1e-84ec-4c23-bd0c-158afa9a4b29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-764f8fb56f-czn89" podUID="ed528f1e-84ec-4c23-bd0c-158afa9a4b29"
Jan 30 14:04:39.760997 containerd[1470]: time="2025-01-30T14:04:39.760912612Z" level=error msg="StopPodSandbox for \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\" failed" error="failed to destroy network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:04:39.761795 kubelet[2530]: E0130 14:04:39.761559 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf"
Jan 30 14:04:39.761795 kubelet[2530]: E0130 14:04:39.761627 2530 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf"}
Jan 30 14:04:39.761795 kubelet[2530]: E0130 14:04:39.761681 2530 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"768db4e0-04b4-4bca-96da-7fc689135d38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 14:04:39.761795 kubelet[2530]: E0130 14:04:39.761718 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"768db4e0-04b4-4bca-96da-7fc689135d38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc6458667-hv869" podUID="768db4e0-04b4-4bca-96da-7fc689135d38"
Jan 30 14:04:39.781790 containerd[1470]: time="2025-01-30T14:04:39.780628616Z" level=error msg="StopPodSandbox for \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\" failed" error="failed to destroy network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:04:39.781970 kubelet[2530]: E0130 14:04:39.781235 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d"
Jan 30 14:04:39.781970 kubelet[2530]: E0130 14:04:39.781293 2530 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d"}
Jan 30 14:04:39.781970 kubelet[2530]: E0130 14:04:39.781342 2530 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a144ec1-46fe-4595-a551-f8f4cec9f827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 14:04:39.781970 kubelet[2530]: E0130 14:04:39.781394 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a144ec1-46fe-4595-a551-f8f4cec9f827\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-92kn2" podUID="5a144ec1-46fe-4595-a551-f8f4cec9f827"
Jan 30 14:04:39.796756 containerd[1470]: time="2025-01-30T14:04:39.796424844Z" level=error msg="StopPodSandbox for \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\" failed" error="failed to destroy network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:04:39.796925 kubelet[2530]: E0130 14:04:39.796748 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238"
Jan 30 14:04:39.796925 kubelet[2530]: E0130 14:04:39.796813 2530 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238"}
Jan 30 14:04:39.796925 kubelet[2530]: E0130 14:04:39.796860 2530 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f66963dd-f3cb-428f-babc-4d8723f64706\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 14:04:39.796925 kubelet[2530]: E0130 14:04:39.796893 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f66963dd-f3cb-428f-babc-4d8723f64706\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-vncj6" podUID="f66963dd-f3cb-428f-babc-4d8723f64706"
Jan 30 14:04:39.798591 containerd[1470]: time="2025-01-30T14:04:39.798507255Z" level=error msg="StopPodSandbox for \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\" failed" error="failed to destroy network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:04:39.798803 containerd[1470]: time="2025-01-30T14:04:39.798511488Z" level=error msg="StopPodSandbox for \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\" failed" error="failed to destroy network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 14:04:39.798903 kubelet[2530]: E0130 14:04:39.798804 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901"
Jan 30 14:04:39.798903 kubelet[2530]: E0130 14:04:39.798855 2530 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901"}
Jan 30 14:04:39.799018 kubelet[2530]: E0130 14:04:39.798907 2530 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ec8df9f-3ea0-4291-8547-137f7df6ece5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 14:04:39.799018 kubelet[2530]: E0130 14:04:39.798942 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ec8df9f-3ea0-4291-8547-137f7df6ece5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gp6sd" podUID="7ec8df9f-3ea0-4291-8547-137f7df6ece5"
Jan 30 14:04:39.799301 kubelet[2530]: E0130 14:04:39.799025 2530 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a"
Jan 30 14:04:39.799301 kubelet[2530]: E0130 14:04:39.799068 2530 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a"}
Jan 30 14:04:39.799301 kubelet[2530]: E0130 14:04:39.799104 2530 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7023258d-44a7-4f54-855a-e497a3b14836\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 30 14:04:39.799301 kubelet[2530]: E0130 14:04:39.799132 2530 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7023258d-44a7-4f54-855a-e497a3b14836\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7dc6458667-6bqgk" podUID="7023258d-44a7-4f54-855a-e497a3b14836"
Jan 30 14:04:46.174843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117513043.mount: Deactivated successfully.
Jan 30 14:04:46.222016 containerd[1470]: time="2025-01-30T14:04:46.221940309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:04:46.223408 containerd[1470]: time="2025-01-30T14:04:46.223261786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Jan 30 14:04:46.224960 containerd[1470]: time="2025-01-30T14:04:46.224888948Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:04:46.228132 containerd[1470]: time="2025-01-30T14:04:46.228056792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:04:46.229544 containerd[1470]: time="2025-01-30T14:04:46.228890811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.656004015s"
Jan 30 14:04:46.229544 containerd[1470]: time="2025-01-30T14:04:46.229209955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Jan 30 14:04:46.248988 containerd[1470]: time="2025-01-30T14:04:46.248918822Z" level=info msg="CreateContainer within sandbox \"5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 30 14:04:46.279681 containerd[1470]: time="2025-01-30T14:04:46.279610501Z" level=info msg="CreateContainer within sandbox \"5981d75b2042e2b8bf919f62ac75084f1854584975001ee4a18e8124ba27f0c1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"25b273f6838f62e4137eb8e7015d429023fa252c2888906bc10a8f8bc297095b\""
Jan 30 14:04:46.280822 containerd[1470]: time="2025-01-30T14:04:46.280457494Z" level=info msg="StartContainer for \"25b273f6838f62e4137eb8e7015d429023fa252c2888906bc10a8f8bc297095b\""
Jan 30 14:04:46.323746 systemd[1]: Started cri-containerd-25b273f6838f62e4137eb8e7015d429023fa252c2888906bc10a8f8bc297095b.scope - libcontainer container 25b273f6838f62e4137eb8e7015d429023fa252c2888906bc10a8f8bc297095b.
Jan 30 14:04:46.367257 containerd[1470]: time="2025-01-30T14:04:46.366979653Z" level=info msg="StartContainer for \"25b273f6838f62e4137eb8e7015d429023fa252c2888906bc10a8f8bc297095b\" returns successfully"
Jan 30 14:04:46.477868 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 30 14:04:46.478049 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jan 30 14:04:48.340500 kernel: bpftool[3748]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 30 14:04:48.621024 systemd-networkd[1371]: vxlan.calico: Link UP
Jan 30 14:04:48.621039 systemd-networkd[1371]: vxlan.calico: Gained carrier
Jan 30 14:04:49.774853 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL
Jan 30 14:04:50.400200 containerd[1470]: time="2025-01-30T14:04:50.400009901Z" level=info msg="StopPodSandbox for \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\""
Jan 30 14:04:50.463419 kubelet[2530]: I0130 14:04:50.463317 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8r42z" podStartSLOduration=5.6213585550000005 podStartE2EDuration="23.463285755s" podCreationTimestamp="2025-01-30 14:04:27 +0000 UTC" firstStartedPulling="2025-01-30 14:04:28.388215657 +0000 UTC m=+13.118096334" lastFinishedPulling="2025-01-30 14:04:46.23014284 +0000 UTC m=+30.960023534" observedRunningTime="2025-01-30 14:04:46.699170002 +0000 UTC m=+31.429050709" watchObservedRunningTime="2025-01-30 14:04:50.463285755 +0000 UTC m=+35.193166481"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.466 [INFO][3842] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.466 [INFO][3842] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" iface="eth0" netns="/var/run/netns/cni-af6a8a38-5023-f519-6659-68a096b631b9"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.467 [INFO][3842] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" iface="eth0" netns="/var/run/netns/cni-af6a8a38-5023-f519-6659-68a096b631b9"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.467 [INFO][3842] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" iface="eth0" netns="/var/run/netns/cni-af6a8a38-5023-f519-6659-68a096b631b9"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.467 [INFO][3842] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.467 [INFO][3842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.494 [INFO][3849] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.494 [INFO][3849] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.494 [INFO][3849] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.503 [WARNING][3849] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.503 [INFO][3849] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.506 [INFO][3849] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 14:04:50.513394 containerd[1470]: 2025-01-30 14:04:50.510 [INFO][3842] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d"
Jan 30 14:04:50.514413 containerd[1470]: time="2025-01-30T14:04:50.514252309Z" level=info msg="TearDown network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\" successfully"
Jan 30 14:04:50.514413 containerd[1470]: time="2025-01-30T14:04:50.514296847Z" level=info msg="StopPodSandbox for \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\" returns successfully"
Jan 30 14:04:50.517282 systemd[1]: run-netns-cni\x2daf6a8a38\x2d5023\x2df519\x2d6659\x2d68a096b631b9.mount: Deactivated successfully.
Jan 30 14:04:50.518010 containerd[1470]: time="2025-01-30T14:04:50.517633971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92kn2,Uid:5a144ec1-46fe-4595-a551-f8f4cec9f827,Namespace:calico-system,Attempt:1,}"
Jan 30 14:04:50.693865 systemd-networkd[1371]: calic4b9fcf6e67: Link UP
Jan 30 14:04:50.695506 systemd-networkd[1371]: calic4b9fcf6e67: Gained carrier
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.586 [INFO][3856] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0 csi-node-driver- calico-system 5a144ec1-46fe-4595-a551-f8f4cec9f827 767 0 2025-01-30 14:04:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal csi-node-driver-92kn2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4b9fcf6e67 [] []}} ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.587 [INFO][3856] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.623 [INFO][3867] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" HandleID="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.637 [INFO][3867] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" HandleID="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", "pod":"csi-node-driver-92kn2", "timestamp":"2025-01-30 14:04:50.623265964 +0000 UTC"}, Hostname:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.637 [INFO][3867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.637 [INFO][3867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.637 [INFO][3867] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal'
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.643 [INFO][3867] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.649 [INFO][3867] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.661 [INFO][3867] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.664 [INFO][3867] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.669 [INFO][3867] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.669 [INFO][3867] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.671 [INFO][3867] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.676 [INFO][3867] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.685 [INFO][3867] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.65/26] block=192.168.67.64/26 handle="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.685 [INFO][3867] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.65/26] handle="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal"
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.685 [INFO][3867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 30 14:04:50.729196 containerd[1470]: 2025-01-30 14:04:50.686 [INFO][3867] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.65/26] IPv6=[] ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" HandleID="k8s-pod-network.2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.731172 containerd[1470]: 2025-01-30 14:04:50.689 [INFO][3856] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a144ec1-46fe-4595-a551-f8f4cec9f827", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-92kn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4b9fcf6e67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 14:04:50.731172 containerd[1470]: 2025-01-30 14:04:50.689 [INFO][3856] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.65/32] ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.731172 containerd[1470]: 2025-01-30 14:04:50.689 [INFO][3856] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4b9fcf6e67 ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.731172 containerd[1470]: 2025-01-30 14:04:50.693 [INFO][3856] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.731172 containerd[1470]: 2025-01-30 14:04:50.694 [INFO][3856] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a144ec1-46fe-4595-a551-f8f4cec9f827", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959", Pod:"csi-node-driver-92kn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4b9fcf6e67", MAC:"22:a0:da:3e:55:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 30 14:04:50.731172 containerd[1470]: 2025-01-30 14:04:50.722 [INFO][3856] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959" Namespace="calico-system" Pod="csi-node-driver-92kn2" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0"
Jan 30 14:04:50.767573 containerd[1470]: time="2025-01-30T14:04:50.767438005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:04:50.767756 containerd[1470]: time="2025-01-30T14:04:50.767609430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:04:50.767756 containerd[1470]: time="2025-01-30T14:04:50.767690315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:04:50.768401 containerd[1470]: time="2025-01-30T14:04:50.767875593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:04:50.810650 systemd[1]: Started cri-containerd-2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959.scope - libcontainer container 2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959.
Jan 30 14:04:50.844137 containerd[1470]: time="2025-01-30T14:04:50.844070335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-92kn2,Uid:5a144ec1-46fe-4595-a551-f8f4cec9f827,Namespace:calico-system,Attempt:1,} returns sandbox id \"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959\"" Jan 30 14:04:50.848347 containerd[1470]: time="2025-01-30T14:04:50.847335472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:04:51.402240 containerd[1470]: time="2025-01-30T14:04:51.401232134Z" level=info msg="StopPodSandbox for \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\"" Jan 30 14:04:51.405336 containerd[1470]: time="2025-01-30T14:04:51.405159777Z" level=info msg="StopPodSandbox for \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\"" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.489 [INFO][3948] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.489 [INFO][3948] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" iface="eth0" netns="/var/run/netns/cni-e89c75bc-18c3-96db-11ff-4e6e92cce00e" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.490 [INFO][3948] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" iface="eth0" netns="/var/run/netns/cni-e89c75bc-18c3-96db-11ff-4e6e92cce00e" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.493 [INFO][3948] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" iface="eth0" netns="/var/run/netns/cni-e89c75bc-18c3-96db-11ff-4e6e92cce00e" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.493 [INFO][3948] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.493 [INFO][3948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.534 [INFO][3964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.534 [INFO][3964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.534 [INFO][3964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.543 [WARNING][3964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.544 [INFO][3964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.545 [INFO][3964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:51.550191 containerd[1470]: 2025-01-30 14:04:51.548 [INFO][3948] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:04:51.551474 containerd[1470]: time="2025-01-30T14:04:51.550495227Z" level=info msg="TearDown network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\" successfully" Jan 30 14:04:51.551474 containerd[1470]: time="2025-01-30T14:04:51.550534275Z" level=info msg="StopPodSandbox for \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\" returns successfully" Jan 30 14:04:51.559680 systemd[1]: run-netns-cni\x2de89c75bc\x2d18c3\x2d96db\x2d11ff\x2d4e6e92cce00e.mount: Deactivated successfully. 
Jan 30 14:04:51.563293 containerd[1470]: time="2025-01-30T14:04:51.560755704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc6458667-hv869,Uid:768db4e0-04b4-4bca-96da-7fc689135d38,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.486 [INFO][3955] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.486 [INFO][3955] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" iface="eth0" netns="/var/run/netns/cni-3df421a5-9c9e-5e27-1b3e-b0ee7201a714" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.486 [INFO][3955] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" iface="eth0" netns="/var/run/netns/cni-3df421a5-9c9e-5e27-1b3e-b0ee7201a714" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.489 [INFO][3955] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" iface="eth0" netns="/var/run/netns/cni-3df421a5-9c9e-5e27-1b3e-b0ee7201a714" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.492 [INFO][3955] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.492 [INFO][3955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.538 [INFO][3963] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.539 [INFO][3963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.545 [INFO][3963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.559 [WARNING][3963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.559 [INFO][3963] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.564 [INFO][3963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:51.568529 containerd[1470]: 2025-01-30 14:04:51.566 [INFO][3955] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:04:51.571641 containerd[1470]: time="2025-01-30T14:04:51.571473818Z" level=info msg="TearDown network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\" successfully" Jan 30 14:04:51.571641 containerd[1470]: time="2025-01-30T14:04:51.571518257Z" level=info msg="StopPodSandbox for \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\" returns successfully" Jan 30 14:04:51.573922 systemd[1]: run-netns-cni\x2d3df421a5\x2d9c9e\x2d5e27\x2d1b3e\x2db0ee7201a714.mount: Deactivated successfully. 
Jan 30 14:04:51.576108 containerd[1470]: time="2025-01-30T14:04:51.575031493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gp6sd,Uid:7ec8df9f-3ea0-4291-8547-137f7df6ece5,Namespace:kube-system,Attempt:1,}" Jan 30 14:04:51.844798 systemd-networkd[1371]: cali6077eba62eb: Link UP Jan 30 14:04:51.845910 systemd-networkd[1371]: cali6077eba62eb: Gained carrier Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.690 [INFO][3975] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0 calico-apiserver-7dc6458667- calico-apiserver 768db4e0-04b4-4bca-96da-7fc689135d38 777 0 2025-01-30 14:04:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc6458667 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal calico-apiserver-7dc6458667-hv869 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6077eba62eb [] []}} ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.690 [INFO][3975] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.871215 containerd[1470]: 
2025-01-30 14:04:51.753 [INFO][3997] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" HandleID="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.771 [INFO][3997] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" HandleID="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004daba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", "pod":"calico-apiserver-7dc6458667-hv869", "timestamp":"2025-01-30 14:04:51.753049582 +0000 UTC"}, Hostname:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.772 [INFO][3997] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.772 [INFO][3997] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.772 [INFO][3997] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal' Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.775 [INFO][3997] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.785 [INFO][3997] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.808 [INFO][3997] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.811 [INFO][3997] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.815 [INFO][3997] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.815 [INFO][3997] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.818 [INFO][3997] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.828 [INFO][3997] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.67.64/26 handle="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.836 [INFO][3997] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.66/26] block=192.168.67.64/26 handle="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.836 [INFO][3997] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.66/26] handle="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.836 [INFO][3997] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:51.871215 containerd[1470]: 2025-01-30 14:04:51.836 [INFO][3997] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.66/26] IPv6=[] ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" HandleID="k8s-pod-network.21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.873629 containerd[1470]: 2025-01-30 14:04:51.838 [INFO][3975] cni-plugin/k8s.go 386: Populated endpoint ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"768db4e0-04b4-4bca-96da-7fc689135d38", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7dc6458667-hv869", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6077eba62eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:51.873629 containerd[1470]: 2025-01-30 14:04:51.838 [INFO][3975] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.66/32] ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.873629 containerd[1470]: 2025-01-30 14:04:51.838 [INFO][3975] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali6077eba62eb ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.873629 containerd[1470]: 2025-01-30 14:04:51.843 [INFO][3975] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.873629 containerd[1470]: 2025-01-30 14:04:51.844 [INFO][3975] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"768db4e0-04b4-4bca-96da-7fc689135d38", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef", Pod:"calico-apiserver-7dc6458667-hv869", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6077eba62eb", MAC:"fe:98:21:3a:ba:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:51.873629 containerd[1470]: 2025-01-30 14:04:51.867 [INFO][3975] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-hv869" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:04:51.920451 containerd[1470]: time="2025-01-30T14:04:51.920293169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:51.920659 containerd[1470]: time="2025-01-30T14:04:51.920489978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:51.920659 containerd[1470]: time="2025-01-30T14:04:51.920542829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:51.920797 containerd[1470]: time="2025-01-30T14:04:51.920714496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:51.976165 systemd-networkd[1371]: cali7352c7359d4: Link UP Jan 30 14:04:51.976818 systemd-networkd[1371]: cali7352c7359d4: Gained carrier Jan 30 14:04:51.977854 systemd[1]: Started cri-containerd-21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef.scope - libcontainer container 21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef. Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.712 [INFO][3976] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0 coredns-6f6b679f8f- kube-system 7ec8df9f-3ea0-4291-8547-137f7df6ece5 776 0 2025-01-30 14:04:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal coredns-6f6b679f8f-gp6sd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7352c7359d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.713 [INFO][3976] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" 
WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.785 [INFO][4001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" HandleID="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.811 [INFO][4001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" HandleID="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-gp6sd", "timestamp":"2025-01-30 14:04:51.785256765 +0000 UTC"}, Hostname:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.811 [INFO][4001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.836 [INFO][4001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.836 [INFO][4001] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal' Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.879 [INFO][4001] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.889 [INFO][4001] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.904 [INFO][4001] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.908 [INFO][4001] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.912 [INFO][4001] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.913 [INFO][4001] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.915 [INFO][4001] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.933 [INFO][4001] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.67.64/26 handle="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.949 [INFO][4001] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.67/26] block=192.168.67.64/26 handle="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.949 [INFO][4001] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.67/26] handle="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.949 [INFO][4001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:52.019612 containerd[1470]: 2025-01-30 14:04:51.949 [INFO][4001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.67/26] IPv6=[] ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" HandleID="k8s-pod-network.0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:52.022615 containerd[1470]: 2025-01-30 14:04:51.957 [INFO][3976] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7ec8df9f-3ea0-4291-8547-137f7df6ece5", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-gp6sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7352c7359d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:52.022615 containerd[1470]: 2025-01-30 14:04:51.957 [INFO][3976] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.67/32] ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" 
WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:52.022615 containerd[1470]: 2025-01-30 14:04:51.957 [INFO][3976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7352c7359d4 ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:52.022615 containerd[1470]: 2025-01-30 14:04:51.977 [INFO][3976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:52.022615 containerd[1470]: 2025-01-30 14:04:51.981 [INFO][3976] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7ec8df9f-3ea0-4291-8547-137f7df6ece5", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d", Pod:"coredns-6f6b679f8f-gp6sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7352c7359d4", MAC:"4e:5e:3b:f1:0c:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:52.022615 containerd[1470]: 2025-01-30 14:04:52.007 [INFO][3976] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d" Namespace="kube-system" Pod="coredns-6f6b679f8f-gp6sd" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:04:52.099658 containerd[1470]: time="2025-01-30T14:04:52.099223805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc6458667-hv869,Uid:768db4e0-04b4-4bca-96da-7fc689135d38,Namespace:calico-apiserver,Attempt:1,} returns sandbox id 
\"21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef\"" Jan 30 14:04:52.125191 containerd[1470]: time="2025-01-30T14:04:52.123654268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:52.125191 containerd[1470]: time="2025-01-30T14:04:52.123745678Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:52.125191 containerd[1470]: time="2025-01-30T14:04:52.123882125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:52.125191 containerd[1470]: time="2025-01-30T14:04:52.124246753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:52.156957 systemd[1]: Started cri-containerd-0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d.scope - libcontainer container 0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d. 
Jan 30 14:04:52.242118 containerd[1470]: time="2025-01-30T14:04:52.242066624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gp6sd,Uid:7ec8df9f-3ea0-4291-8547-137f7df6ece5,Namespace:kube-system,Attempt:1,} returns sandbox id \"0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d\"" Jan 30 14:04:52.249623 containerd[1470]: time="2025-01-30T14:04:52.249316750Z" level=info msg="CreateContainer within sandbox \"0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:04:52.271389 systemd-networkd[1371]: calic4b9fcf6e67: Gained IPv6LL Jan 30 14:04:52.272265 containerd[1470]: time="2025-01-30T14:04:52.271630590Z" level=info msg="CreateContainer within sandbox \"0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"faba5f3855e62380df8ecd2295f50c705100977c36a40ca334599517f4198fee\"" Jan 30 14:04:52.274399 containerd[1470]: time="2025-01-30T14:04:52.273709555Z" level=info msg="StartContainer for \"faba5f3855e62380df8ecd2295f50c705100977c36a40ca334599517f4198fee\"" Jan 30 14:04:52.332788 systemd[1]: Started cri-containerd-faba5f3855e62380df8ecd2295f50c705100977c36a40ca334599517f4198fee.scope - libcontainer container faba5f3855e62380df8ecd2295f50c705100977c36a40ca334599517f4198fee. 
Jan 30 14:04:52.387533 containerd[1470]: time="2025-01-30T14:04:52.386634977Z" level=info msg="StartContainer for \"faba5f3855e62380df8ecd2295f50c705100977c36a40ca334599517f4198fee\" returns successfully" Jan 30 14:04:52.401417 containerd[1470]: time="2025-01-30T14:04:52.401070032Z" level=info msg="StopPodSandbox for \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\"" Jan 30 14:04:52.430401 containerd[1470]: time="2025-01-30T14:04:52.430175159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:52.433080 containerd[1470]: time="2025-01-30T14:04:52.432806485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 14:04:52.438929 containerd[1470]: time="2025-01-30T14:04:52.438545942Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:52.447013 containerd[1470]: time="2025-01-30T14:04:52.446953671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:52.450642 containerd[1470]: time="2025-01-30T14:04:52.450446147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.602954465s" Jan 30 14:04:52.450642 containerd[1470]: time="2025-01-30T14:04:52.450517766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 14:04:52.458390 containerd[1470]: time="2025-01-30T14:04:52.456664002Z" level=info msg="CreateContainer within sandbox \"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:04:52.458604 containerd[1470]: time="2025-01-30T14:04:52.455356749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:04:52.498324 containerd[1470]: time="2025-01-30T14:04:52.497862077Z" level=info msg="CreateContainer within sandbox \"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d712e83f16e566ccdc38a077852d894cd8a142531e7ab35f7c265ee3de000a45\"" Jan 30 14:04:52.502506 containerd[1470]: time="2025-01-30T14:04:52.501257893Z" level=info msg="StartContainer for \"d712e83f16e566ccdc38a077852d894cd8a142531e7ab35f7c265ee3de000a45\"" Jan 30 14:04:52.588607 systemd[1]: Started cri-containerd-d712e83f16e566ccdc38a077852d894cd8a142531e7ab35f7c265ee3de000a45.scope - libcontainer container d712e83f16e566ccdc38a077852d894cd8a142531e7ab35f7c265ee3de000a45. Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.532 [INFO][4176] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.534 [INFO][4176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" iface="eth0" netns="/var/run/netns/cni-ac00eff0-6a0a-ef8d-72aa-a1a281256b1b" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.535 [INFO][4176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" iface="eth0" netns="/var/run/netns/cni-ac00eff0-6a0a-ef8d-72aa-a1a281256b1b" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.537 [INFO][4176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" iface="eth0" netns="/var/run/netns/cni-ac00eff0-6a0a-ef8d-72aa-a1a281256b1b" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.537 [INFO][4176] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.538 [INFO][4176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.605 [INFO][4197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.605 [INFO][4197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.605 [INFO][4197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.616 [WARNING][4197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.616 [INFO][4197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.619 [INFO][4197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:52.628283 containerd[1470]: 2025-01-30 14:04:52.621 [INFO][4176] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:04:52.633622 containerd[1470]: time="2025-01-30T14:04:52.633569742Z" level=info msg="TearDown network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\" successfully" Jan 30 14:04:52.633848 containerd[1470]: time="2025-01-30T14:04:52.633817994Z" level=info msg="StopPodSandbox for \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\" returns successfully" Jan 30 14:04:52.642622 containerd[1470]: time="2025-01-30T14:04:52.639983497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vncj6,Uid:f66963dd-f3cb-428f-babc-4d8723f64706,Namespace:kube-system,Attempt:1,}" Jan 30 14:04:52.641386 systemd[1]: run-netns-cni\x2dac00eff0\x2d6a0a\x2def8d\x2d72aa\x2da1a281256b1b.mount: Deactivated successfully. 
Jan 30 14:04:52.650086 containerd[1470]: time="2025-01-30T14:04:52.650006945Z" level=info msg="StartContainer for \"d712e83f16e566ccdc38a077852d894cd8a142531e7ab35f7c265ee3de000a45\" returns successfully" Jan 30 14:04:52.728047 kubelet[2530]: I0130 14:04:52.727957 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gp6sd" podStartSLOduration=31.7279319 podStartE2EDuration="31.7279319s" podCreationTimestamp="2025-01-30 14:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:52.727698918 +0000 UTC m=+37.457579618" watchObservedRunningTime="2025-01-30 14:04:52.7279319 +0000 UTC m=+37.457812601" Jan 30 14:04:52.885628 systemd-networkd[1371]: cali5bbe6f77ce8: Link UP Jan 30 14:04:52.887882 systemd-networkd[1371]: cali5bbe6f77ce8: Gained carrier Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.759 [INFO][4224] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0 coredns-6f6b679f8f- kube-system f66963dd-f3cb-428f-babc-4d8723f64706 792 0 2025-01-30 14:04:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal coredns-6f6b679f8f-vncj6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5bbe6f77ce8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-" Jan 30 14:04:52.915561 
containerd[1470]: 2025-01-30 14:04:52.759 [INFO][4224] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.823 [INFO][4238] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" HandleID="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.839 [INFO][4238] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" HandleID="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318980), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-vncj6", "timestamp":"2025-01-30 14:04:52.823770662 +0000 UTC"}, Hostname:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.839 [INFO][4238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.839 [INFO][4238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.839 [INFO][4238] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal' Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.842 [INFO][4238] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.847 [INFO][4238] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.853 [INFO][4238] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.855 [INFO][4238] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.858 [INFO][4238] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.859 [INFO][4238] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.860 [INFO][4238] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2 Jan 30 
14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.866 [INFO][4238] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.875 [INFO][4238] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.68/26] block=192.168.67.64/26 handle="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.875 [INFO][4238] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.68/26] handle="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.876 [INFO][4238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:04:52.915561 containerd[1470]: 2025-01-30 14:04:52.876 [INFO][4238] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.68/26] IPv6=[] ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" HandleID="k8s-pod-network.d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.918525 containerd[1470]: 2025-01-30 14:04:52.878 [INFO][4224] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f66963dd-f3cb-428f-babc-4d8723f64706", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-vncj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.68/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5bbe6f77ce8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:52.918525 containerd[1470]: 2025-01-30 14:04:52.878 [INFO][4224] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.68/32] ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.918525 containerd[1470]: 2025-01-30 14:04:52.878 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bbe6f77ce8 ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.918525 containerd[1470]: 2025-01-30 14:04:52.889 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.918525 containerd[1470]: 2025-01-30 14:04:52.890 [INFO][4224] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f66963dd-f3cb-428f-babc-4d8723f64706", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2", Pod:"coredns-6f6b679f8f-vncj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5bbe6f77ce8", MAC:"12:34:1b:08:bc:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:52.918525 containerd[1470]: 2025-01-30 14:04:52.910 [INFO][4224] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2" Namespace="kube-system" Pod="coredns-6f6b679f8f-vncj6" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:04:52.958356 containerd[1470]: time="2025-01-30T14:04:52.958195202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:52.958356 containerd[1470]: time="2025-01-30T14:04:52.958306450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:52.958801 containerd[1470]: time="2025-01-30T14:04:52.958331822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:52.959820 containerd[1470]: time="2025-01-30T14:04:52.959701305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:52.986882 systemd[1]: Started cri-containerd-d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2.scope - libcontainer container d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2. 
Jan 30 14:04:53.045511 containerd[1470]: time="2025-01-30T14:04:53.044954721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-vncj6,Uid:f66963dd-f3cb-428f-babc-4d8723f64706,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2\"" Jan 30 14:04:53.049550 containerd[1470]: time="2025-01-30T14:04:53.049514374Z" level=info msg="CreateContainer within sandbox \"d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:04:53.074114 containerd[1470]: time="2025-01-30T14:04:53.074071602Z" level=info msg="CreateContainer within sandbox \"d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26b253d579996c8d10f3df66c8ef42a468a4afc1680908f79b748bf7aa4f4228\"" Jan 30 14:04:53.076296 containerd[1470]: time="2025-01-30T14:04:53.075073195Z" level=info msg="StartContainer for \"26b253d579996c8d10f3df66c8ef42a468a4afc1680908f79b748bf7aa4f4228\"" Jan 30 14:04:53.112209 systemd[1]: Started cri-containerd-26b253d579996c8d10f3df66c8ef42a468a4afc1680908f79b748bf7aa4f4228.scope - libcontainer container 26b253d579996c8d10f3df66c8ef42a468a4afc1680908f79b748bf7aa4f4228. 
Jan 30 14:04:53.155979 containerd[1470]: time="2025-01-30T14:04:53.155805405Z" level=info msg="StartContainer for \"26b253d579996c8d10f3df66c8ef42a468a4afc1680908f79b748bf7aa4f4228\" returns successfully" Jan 30 14:04:53.358567 systemd-networkd[1371]: cali6077eba62eb: Gained IPv6LL Jan 30 14:04:53.402739 containerd[1470]: time="2025-01-30T14:04:53.402689488Z" level=info msg="StopPodSandbox for \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\"" Jan 30 14:04:53.403488 containerd[1470]: time="2025-01-30T14:04:53.403192170Z" level=info msg="StopPodSandbox for \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\"" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.498 [INFO][4366] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.499 [INFO][4366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" iface="eth0" netns="/var/run/netns/cni-e96af3cd-41dc-40de-608b-311037e222fd" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.500 [INFO][4366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" iface="eth0" netns="/var/run/netns/cni-e96af3cd-41dc-40de-608b-311037e222fd" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.502 [INFO][4366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" iface="eth0" netns="/var/run/netns/cni-e96af3cd-41dc-40de-608b-311037e222fd" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.502 [INFO][4366] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.502 [INFO][4366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.594 [INFO][4378] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.594 [INFO][4378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.594 [INFO][4378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.618 [WARNING][4378] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.618 [INFO][4378] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.620 [INFO][4378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:53.629498 containerd[1470]: 2025-01-30 14:04:53.622 [INFO][4366] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:04:53.633221 containerd[1470]: time="2025-01-30T14:04:53.631650393Z" level=info msg="TearDown network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\" successfully" Jan 30 14:04:53.633221 containerd[1470]: time="2025-01-30T14:04:53.631691924Z" level=info msg="StopPodSandbox for \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\" returns successfully" Jan 30 14:04:53.640085 containerd[1470]: time="2025-01-30T14:04:53.638228320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc6458667-6bqgk,Uid:7023258d-44a7-4f54-855a-e497a3b14836,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:04:53.641657 systemd[1]: run-netns-cni\x2de96af3cd\x2d41dc\x2d40de\x2d608b\x2d311037e222fd.mount: Deactivated successfully. 
Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.506 [INFO][4365] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.507 [INFO][4365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" iface="eth0" netns="/var/run/netns/cni-a1d3188d-9471-98f4-d64b-af10bcca867e" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.508 [INFO][4365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" iface="eth0" netns="/var/run/netns/cni-a1d3188d-9471-98f4-d64b-af10bcca867e" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.510 [INFO][4365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" iface="eth0" netns="/var/run/netns/cni-a1d3188d-9471-98f4-d64b-af10bcca867e" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.510 [INFO][4365] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.511 [INFO][4365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.610 [INFO][4382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:53.654284 
containerd[1470]: 2025-01-30 14:04:53.611 [INFO][4382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.622 [INFO][4382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.641 [WARNING][4382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.642 [INFO][4382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.646 [INFO][4382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:53.654284 containerd[1470]: 2025-01-30 14:04:53.649 [INFO][4365] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:04:53.659604 containerd[1470]: time="2025-01-30T14:04:53.656631026Z" level=info msg="TearDown network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\" successfully" Jan 30 14:04:53.659604 containerd[1470]: time="2025-01-30T14:04:53.657251545Z" level=info msg="StopPodSandbox for \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\" returns successfully" Jan 30 14:04:53.665091 containerd[1470]: time="2025-01-30T14:04:53.664242789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f8fb56f-czn89,Uid:ed528f1e-84ec-4c23-bd0c-158afa9a4b29,Namespace:calico-system,Attempt:1,}" Jan 30 14:04:53.668080 systemd[1]: run-netns-cni\x2da1d3188d\x2d9471\x2d98f4\x2dd64b\x2daf10bcca867e.mount: Deactivated successfully. Jan 30 14:04:53.938726 systemd-networkd[1371]: cali5bbe6f77ce8: Gained IPv6LL Jan 30 14:04:53.942352 systemd-networkd[1371]: cali7352c7359d4: Gained IPv6LL Jan 30 14:04:54.079010 systemd-networkd[1371]: cali6df37299b6c: Link UP Jan 30 14:04:54.083296 systemd-networkd[1371]: cali6df37299b6c: Gained carrier Jan 30 14:04:54.107895 kubelet[2530]: I0130 14:04:54.106878 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-vncj6" podStartSLOduration=33.106838412 podStartE2EDuration="33.106838412s" podCreationTimestamp="2025-01-30 14:04:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:53.760873149 +0000 UTC m=+38.490753849" watchObservedRunningTime="2025-01-30 14:04:54.106838412 +0000 UTC m=+38.836719108" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.821 [INFO][4392] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0 calico-apiserver-7dc6458667- calico-apiserver 7023258d-44a7-4f54-855a-e497a3b14836 813 0 2025-01-30 14:04:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7dc6458667 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal calico-apiserver-7dc6458667-6bqgk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6df37299b6c [] []}} ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.821 [INFO][4392] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.916 [INFO][4417] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" HandleID="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.934 [INFO][4417] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" HandleID="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051a40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", "pod":"calico-apiserver-7dc6458667-6bqgk", "timestamp":"2025-01-30 14:04:53.916439644 +0000 UTC"}, Hostname:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.934 [INFO][4417] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.934 [INFO][4417] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.934 [INFO][4417] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal' Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:53.937 [INFO][4417] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.031 [INFO][4417] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.040 [INFO][4417] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.043 [INFO][4417] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.047 [INFO][4417] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.047 [INFO][4417] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.050 [INFO][4417] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.055 [INFO][4417] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.67.64/26 handle="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.066 [INFO][4417] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.69/26] block=192.168.67.64/26 handle="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.066 [INFO][4417] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.69/26] handle="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.066 [INFO][4417] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:54.110233 containerd[1470]: 2025-01-30 14:04:54.066 [INFO][4417] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.69/26] IPv6=[] ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" HandleID="k8s-pod-network.3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:54.112468 containerd[1470]: 2025-01-30 14:04:54.072 [INFO][4392] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"7023258d-44a7-4f54-855a-e497a3b14836", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7dc6458667-6bqgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6df37299b6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:54.112468 containerd[1470]: 2025-01-30 14:04:54.072 [INFO][4392] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.69/32] ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:54.112468 containerd[1470]: 2025-01-30 14:04:54.072 [INFO][4392] cni-plugin/dataplane_linux.go 69: Setting the host side 
veth name to cali6df37299b6c ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:54.112468 containerd[1470]: 2025-01-30 14:04:54.083 [INFO][4392] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:54.112468 containerd[1470]: 2025-01-30 14:04:54.084 [INFO][4392] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"7023258d-44a7-4f54-855a-e497a3b14836", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d", Pod:"calico-apiserver-7dc6458667-6bqgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6df37299b6c", MAC:"d6:4a:69:6d:79:f8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:54.112468 containerd[1470]: 2025-01-30 14:04:54.104 [INFO][4392] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d" Namespace="calico-apiserver" Pod="calico-apiserver-7dc6458667-6bqgk" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:04:54.192849 containerd[1470]: time="2025-01-30T14:04:54.190957949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:54.192849 containerd[1470]: time="2025-01-30T14:04:54.191052842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:54.192849 containerd[1470]: time="2025-01-30T14:04:54.191076941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:54.196118 containerd[1470]: time="2025-01-30T14:04:54.193945696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:54.237130 systemd-networkd[1371]: cali6d3b76d6514: Link UP Jan 30 14:04:54.239252 systemd-networkd[1371]: cali6d3b76d6514: Gained carrier Jan 30 14:04:54.239763 systemd[1]: Started cri-containerd-3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d.scope - libcontainer container 3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d. Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:53.869 [INFO][4401] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0 calico-kube-controllers-764f8fb56f- calico-system ed528f1e-84ec-4c23-bd0c-158afa9a4b29 814 0 2025-01-30 14:04:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:764f8fb56f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal calico-kube-controllers-764f8fb56f-czn89 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6d3b76d6514 [] []}} ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:53.869 [INFO][4401] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:53.960 [INFO][4423] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" HandleID="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.034 [INFO][4423] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" HandleID="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edc00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", "pod":"calico-kube-controllers-764f8fb56f-czn89", "timestamp":"2025-01-30 14:04:53.960026891 +0000 UTC"}, Hostname:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.035 [INFO][4423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.068 [INFO][4423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.068 [INFO][4423] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal' Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.073 [INFO][4423] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.138 [INFO][4423] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.150 [INFO][4423] ipam/ipam.go 489: Trying affinity for 192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.159 [INFO][4423] ipam/ipam.go 155: Attempting to load block cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.168 [INFO][4423] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.67.64/26 host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.168 [INFO][4423] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.67.64/26 handle="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.173 [INFO][4423] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9 Jan 30 
14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.185 [INFO][4423] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.67.64/26 handle="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.207 [INFO][4423] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.67.70/26] block=192.168.67.64/26 handle="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.208 [INFO][4423] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.67.70/26] handle="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" host="ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal" Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.209 [INFO][4423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:04:54.286418 containerd[1470]: 2025-01-30 14:04:54.210 [INFO][4423] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.67.70/26] IPv6=[] ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" HandleID="k8s-pod-network.b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:54.290956 containerd[1470]: 2025-01-30 14:04:54.216 [INFO][4401] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0", GenerateName:"calico-kube-controllers-764f8fb56f-", Namespace:"calico-system", SelfLink:"", UID:"ed528f1e-84ec-4c23-bd0c-158afa9a4b29", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f8fb56f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-764f8fb56f-czn89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d3b76d6514", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:54.290956 containerd[1470]: 2025-01-30 14:04:54.216 [INFO][4401] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.67.70/32] ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:54.290956 containerd[1470]: 2025-01-30 14:04:54.217 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d3b76d6514 ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:54.290956 containerd[1470]: 2025-01-30 14:04:54.244 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:54.290956 containerd[1470]: 2025-01-30 14:04:54.245 [INFO][4401] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0", GenerateName:"calico-kube-controllers-764f8fb56f-", Namespace:"calico-system", SelfLink:"", UID:"ed528f1e-84ec-4c23-bd0c-158afa9a4b29", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f8fb56f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9", Pod:"calico-kube-controllers-764f8fb56f-czn89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d3b76d6514", MAC:"a2:24:b5:49:a6:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:54.290956 containerd[1470]: 
2025-01-30 14:04:54.280 [INFO][4401] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9" Namespace="calico-system" Pod="calico-kube-controllers-764f8fb56f-czn89" WorkloadEndpoint="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:04:54.370590 containerd[1470]: time="2025-01-30T14:04:54.369571141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:54.370790 containerd[1470]: time="2025-01-30T14:04:54.370664581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:54.371461 containerd[1470]: time="2025-01-30T14:04:54.370951766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:54.371461 containerd[1470]: time="2025-01-30T14:04:54.371280239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:54.430678 systemd[1]: Started cri-containerd-b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9.scope - libcontainer container b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9. 
Jan 30 14:04:54.450462 containerd[1470]: time="2025-01-30T14:04:54.450202032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7dc6458667-6bqgk,Uid:7023258d-44a7-4f54-855a-e497a3b14836,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d\"" Jan 30 14:04:54.550358 containerd[1470]: time="2025-01-30T14:04:54.550308171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-764f8fb56f-czn89,Uid:ed528f1e-84ec-4c23-bd0c-158afa9a4b29,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9\"" Jan 30 14:04:55.470830 systemd-networkd[1371]: cali6df37299b6c: Gained IPv6LL Jan 30 14:04:55.580408 containerd[1470]: time="2025-01-30T14:04:55.580332434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:55.581899 containerd[1470]: time="2025-01-30T14:04:55.581638992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 14:04:55.584133 containerd[1470]: time="2025-01-30T14:04:55.583097692Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:55.587632 containerd[1470]: time="2025-01-30T14:04:55.587562809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:55.588815 containerd[1470]: time="2025-01-30T14:04:55.588636139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.129898348s" Jan 30 14:04:55.588815 containerd[1470]: time="2025-01-30T14:04:55.588681666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 14:04:55.591777 containerd[1470]: time="2025-01-30T14:04:55.591606696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:04:55.592751 containerd[1470]: time="2025-01-30T14:04:55.592685353Z" level=info msg="CreateContainer within sandbox \"21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:04:55.615978 containerd[1470]: time="2025-01-30T14:04:55.615802019Z" level=info msg="CreateContainer within sandbox \"21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2cc6c11076a508d65ccb093855df61dad3608ff60e5901ba2b38513237e0aa89\"" Jan 30 14:04:55.618071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052356666.mount: Deactivated successfully. Jan 30 14:04:55.621116 containerd[1470]: time="2025-01-30T14:04:55.620995009Z" level=info msg="StartContainer for \"2cc6c11076a508d65ccb093855df61dad3608ff60e5901ba2b38513237e0aa89\"" Jan 30 14:04:55.698652 systemd[1]: Started cri-containerd-2cc6c11076a508d65ccb093855df61dad3608ff60e5901ba2b38513237e0aa89.scope - libcontainer container 2cc6c11076a508d65ccb093855df61dad3608ff60e5901ba2b38513237e0aa89. 
Jan 30 14:04:55.764448 containerd[1470]: time="2025-01-30T14:04:55.764266364Z" level=info msg="StartContainer for \"2cc6c11076a508d65ccb093855df61dad3608ff60e5901ba2b38513237e0aa89\" returns successfully" Jan 30 14:04:55.790652 systemd-networkd[1371]: cali6d3b76d6514: Gained IPv6LL Jan 30 14:04:56.970817 containerd[1470]: time="2025-01-30T14:04:56.970438336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:56.973312 containerd[1470]: time="2025-01-30T14:04:56.973238936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 14:04:56.975410 containerd[1470]: time="2025-01-30T14:04:56.975128090Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:56.984918 containerd[1470]: time="2025-01-30T14:04:56.984858424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:56.989318 containerd[1470]: time="2025-01-30T14:04:56.988973911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.396962375s" Jan 30 14:04:56.989318 containerd[1470]: time="2025-01-30T14:04:56.989034791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 14:04:56.995285 containerd[1470]: time="2025-01-30T14:04:56.995009915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:04:56.999286 containerd[1470]: time="2025-01-30T14:04:56.999225712Z" level=info msg="CreateContainer within sandbox \"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:04:57.033807 containerd[1470]: time="2025-01-30T14:04:57.033615641Z" level=info msg="CreateContainer within sandbox \"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ad324c8db995bace0774d50fcf03ec2f66eada1122e510cd1488838e043b9a12\"" Jan 30 14:04:57.036436 containerd[1470]: time="2025-01-30T14:04:57.036095626Z" level=info msg="StartContainer for \"ad324c8db995bace0774d50fcf03ec2f66eada1122e510cd1488838e043b9a12\"" Jan 30 14:04:57.091677 systemd[1]: Started cri-containerd-ad324c8db995bace0774d50fcf03ec2f66eada1122e510cd1488838e043b9a12.scope - libcontainer container ad324c8db995bace0774d50fcf03ec2f66eada1122e510cd1488838e043b9a12. 
Jan 30 14:04:57.135432 containerd[1470]: time="2025-01-30T14:04:57.135144428Z" level=info msg="StartContainer for \"ad324c8db995bace0774d50fcf03ec2f66eada1122e510cd1488838e043b9a12\" returns successfully" Jan 30 14:04:57.228104 containerd[1470]: time="2025-01-30T14:04:57.227943948Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:57.229965 containerd[1470]: time="2025-01-30T14:04:57.229887166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:04:57.232803 containerd[1470]: time="2025-01-30T14:04:57.232757098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 237.69256ms" Jan 30 14:04:57.232938 containerd[1470]: time="2025-01-30T14:04:57.232805615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 14:04:57.234645 containerd[1470]: time="2025-01-30T14:04:57.234595070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:04:57.236155 containerd[1470]: time="2025-01-30T14:04:57.236082659Z" level=info msg="CreateContainer within sandbox \"3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:04:57.261683 containerd[1470]: time="2025-01-30T14:04:57.261626495Z" level=info msg="CreateContainer within sandbox \"3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"7bd0148d8d9bbeebe8bda1d12f9370902cf77879bbe9ee2b4c4cd8f0501e9474\"" Jan 30 14:04:57.262613 containerd[1470]: time="2025-01-30T14:04:57.262421188Z" level=info msg="StartContainer for \"7bd0148d8d9bbeebe8bda1d12f9370902cf77879bbe9ee2b4c4cd8f0501e9474\"" Jan 30 14:04:57.305614 systemd[1]: Started cri-containerd-7bd0148d8d9bbeebe8bda1d12f9370902cf77879bbe9ee2b4c4cd8f0501e9474.scope - libcontainer container 7bd0148d8d9bbeebe8bda1d12f9370902cf77879bbe9ee2b4c4cd8f0501e9474. Jan 30 14:04:57.370294 containerd[1470]: time="2025-01-30T14:04:57.369335640Z" level=info msg="StartContainer for \"7bd0148d8d9bbeebe8bda1d12f9370902cf77879bbe9ee2b4c4cd8f0501e9474\" returns successfully" Jan 30 14:04:57.524543 kubelet[2530]: I0130 14:04:57.524393 2530 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:04:57.524543 kubelet[2530]: I0130 14:04:57.524439 2530 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:04:57.770310 kubelet[2530]: I0130 14:04:57.768987 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:04:57.792859 kubelet[2530]: I0130 14:04:57.792660 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-92kn2" podStartSLOduration=23.646686993 podStartE2EDuration="29.792633885s" podCreationTimestamp="2025-01-30 14:04:28 +0000 UTC" firstStartedPulling="2025-01-30 14:04:50.846540582 +0000 UTC m=+35.576421276" lastFinishedPulling="2025-01-30 14:04:56.992487474 +0000 UTC m=+41.722368168" observedRunningTime="2025-01-30 14:04:57.788217028 +0000 UTC m=+42.518097729" watchObservedRunningTime="2025-01-30 14:04:57.792633885 +0000 UTC m=+42.522514587" Jan 30 14:04:57.793158 kubelet[2530]: I0130 14:04:57.793084 2530 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dc6458667-hv869" podStartSLOduration=27.306454077 podStartE2EDuration="30.7930686s" podCreationTimestamp="2025-01-30 14:04:27 +0000 UTC" firstStartedPulling="2025-01-30 14:04:52.103550095 +0000 UTC m=+36.833430780" lastFinishedPulling="2025-01-30 14:04:55.590164615 +0000 UTC m=+40.320045303" observedRunningTime="2025-01-30 14:04:56.77691304 +0000 UTC m=+41.506793740" watchObservedRunningTime="2025-01-30 14:04:57.7930686 +0000 UTC m=+42.522949300" Jan 30 14:04:57.812876 kubelet[2530]: I0130 14:04:57.812789 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7dc6458667-6bqgk" podStartSLOduration=28.032090391 podStartE2EDuration="30.812767317s" podCreationTimestamp="2025-01-30 14:04:27 +0000 UTC" firstStartedPulling="2025-01-30 14:04:54.453105275 +0000 UTC m=+39.182985965" lastFinishedPulling="2025-01-30 14:04:57.233782202 +0000 UTC m=+41.963662891" observedRunningTime="2025-01-30 14:04:57.811277754 +0000 UTC m=+42.541158453" watchObservedRunningTime="2025-01-30 14:04:57.812767317 +0000 UTC m=+42.542648017" Jan 30 14:04:58.345866 ntpd[1428]: Listen normally on 7 vxlan.calico 192.168.67.64:123 Jan 30 14:04:58.346051 ntpd[1428]: Listen normally on 8 vxlan.calico [fe80::6463:45ff:fe7e:5f6b%4]:123 Jan 30 14:04:58.346141 ntpd[1428]: Listen normally on 9 calic4b9fcf6e67 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 14:04:58.346201 ntpd[1428]: Listen normally on 10 cali6077eba62eb [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 14:04:58.346267 ntpd[1428]: Listen normally on 11 cali7352c7359d4 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 14:04:58.346327 ntpd[1428]: Listen normally on 12 cali5bbe6f77ce8 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 30 14:04:58.346404 ntpd[1428]: Listen normally on 13 cali6df37299b6c [fe80::ecee:eeff:feee:eeee%11]:123 Jan 30 14:04:58.346462 ntpd[1428]: Listen normally on 14 cali6d3b76d6514 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 30 14:04:58.776189 kubelet[2530]: I0130 14:04:58.775331 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:05:00.130589 containerd[1470]: time="2025-01-30T14:05:00.130493493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:05:00.132037 containerd[1470]: time="2025-01-30T14:05:00.131956860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 14:05:00.133804 containerd[1470]: time="2025-01-30T14:05:00.133711446Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:05:00.138396 containerd[1470]: time="2025-01-30T14:05:00.138217986Z"
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:05:00.140059 containerd[1470]: time="2025-01-30T14:05:00.139288147Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.904645704s" Jan 30 14:05:00.140059 containerd[1470]: time="2025-01-30T14:05:00.139339243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 14:05:00.162257 containerd[1470]: time="2025-01-30T14:05:00.162207525Z" level=info msg="CreateContainer within sandbox \"b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:05:00.186133 containerd[1470]: time="2025-01-30T14:05:00.186049606Z" level=info msg="CreateContainer within sandbox \"b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f9e1fb476869f534f2114d1aec19c920f8334cfb685e9e0e408029a2a11928f4\"" Jan 30 14:05:00.186825 containerd[1470]: time="2025-01-30T14:05:00.186791650Z" level=info msg="StartContainer for \"f9e1fb476869f534f2114d1aec19c920f8334cfb685e9e0e408029a2a11928f4\"" Jan 30 14:05:00.227678 systemd[1]: Started cri-containerd-f9e1fb476869f534f2114d1aec19c920f8334cfb685e9e0e408029a2a11928f4.scope - libcontainer container f9e1fb476869f534f2114d1aec19c920f8334cfb685e9e0e408029a2a11928f4. 
Jan 30 14:05:00.298679 containerd[1470]: time="2025-01-30T14:05:00.298068854Z" level=info msg="StartContainer for \"f9e1fb476869f534f2114d1aec19c920f8334cfb685e9e0e408029a2a11928f4\" returns successfully" Jan 30 14:05:00.802644 kubelet[2530]: I0130 14:05:00.802539 2530 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-764f8fb56f-czn89" podStartSLOduration=27.214400189 podStartE2EDuration="32.802290112s" podCreationTimestamp="2025-01-30 14:04:28 +0000 UTC" firstStartedPulling="2025-01-30 14:04:54.552806684 +0000 UTC m=+39.282687373" lastFinishedPulling="2025-01-30 14:05:00.1406966 +0000 UTC m=+44.870577296" observedRunningTime="2025-01-30 14:05:00.80081918 +0000 UTC m=+45.530699894" watchObservedRunningTime="2025-01-30 14:05:00.802290112 +0000 UTC m=+45.532170813" Jan 30 14:05:01.787996 kubelet[2530]: I0130 14:05:01.787871 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:05:02.084812 systemd[1]: run-containerd-runc-k8s.io-f9e1fb476869f534f2114d1aec19c920f8334cfb685e9e0e408029a2a11928f4-runc.9kEF7b.mount: Deactivated successfully. Jan 30 14:05:02.513862 systemd[1]: Started sshd@7-10.128.0.55:22-162.214.12.117:45570.service - OpenSSH per-connection server daemon (162.214.12.117:45570). Jan 30 14:05:02.752943 sshd[4789]: Received disconnect from 162.214.12.117 port 45570:11: Bye Bye [preauth] Jan 30 14:05:02.752943 sshd[4789]: Disconnected from authenticating user root 162.214.12.117 port 45570 [preauth] Jan 30 14:05:02.756109 systemd[1]: sshd@7-10.128.0.55:22-162.214.12.117:45570.service: Deactivated successfully. Jan 30 14:05:09.289928 kubelet[2530]: I0130 14:05:09.289481 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:05:14.577435 systemd[1]: Started sshd@8-10.128.0.55:22-139.178.68.195:40296.service - OpenSSH per-connection server daemon (139.178.68.195:40296). 
Jan 30 14:05:14.734122 kubelet[2530]: I0130 14:05:14.733187 2530 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:05:14.866509 sshd[4812]: Accepted publickey for core from 139.178.68.195 port 40296 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:05:14.868802 sshd[4812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:14.876599 systemd-logind[1441]: New session 8 of user core. Jan 30 14:05:14.880609 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:05:15.169132 sshd[4812]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:15.175827 systemd[1]: sshd@8-10.128.0.55:22-139.178.68.195:40296.service: Deactivated successfully. Jan 30 14:05:15.179225 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:05:15.181280 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:05:15.183946 systemd-logind[1441]: Removed session 8. Jan 30 14:05:15.422007 containerd[1470]: time="2025-01-30T14:05:15.421740503Z" level=info msg="StopPodSandbox for \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\"" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.484 [WARNING][4842] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f66963dd-f3cb-428f-babc-4d8723f64706", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2", Pod:"coredns-6f6b679f8f-vncj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5bbe6f77ce8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 
14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.485 [INFO][4842] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.485 [INFO][4842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" iface="eth0" netns="" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.485 [INFO][4842] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.485 [INFO][4842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.515 [INFO][4848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.516 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.516 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.530 [WARNING][4848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.530 [INFO][4848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.532 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:15.535775 containerd[1470]: 2025-01-30 14:05:15.534 [INFO][4842] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.536937 containerd[1470]: time="2025-01-30T14:05:15.536742420Z" level=info msg="TearDown network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\" successfully" Jan 30 14:05:15.536937 containerd[1470]: time="2025-01-30T14:05:15.536789532Z" level=info msg="StopPodSandbox for \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\" returns successfully" Jan 30 14:05:15.537824 containerd[1470]: time="2025-01-30T14:05:15.537790502Z" level=info msg="RemovePodSandbox for \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\"" Jan 30 14:05:15.538212 containerd[1470]: time="2025-01-30T14:05:15.537970249Z" level=info msg="Forcibly stopping sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\"" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.600 [WARNING][4867] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f66963dd-f3cb-428f-babc-4d8723f64706", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"d5701177b7a54eac3bcd8eaa868e46f221e89a1362f793a02a833e67ff17ded2", Pod:"coredns-6f6b679f8f-vncj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5bbe6f77ce8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.601 [INFO][4867] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.601 [INFO][4867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" iface="eth0" netns="" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.602 [INFO][4867] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.602 [INFO][4867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.633 [INFO][4873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.633 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.633 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.642 [WARNING][4873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.642 [INFO][4873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" HandleID="k8s-pod-network.9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--vncj6-eth0" Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.644 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:15.646684 containerd[1470]: 2025-01-30 14:05:15.645 [INFO][4867] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238" Jan 30 14:05:15.647721 containerd[1470]: time="2025-01-30T14:05:15.646776179Z" level=info msg="TearDown network for sandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\" successfully" Jan 30 14:05:15.652193 containerd[1470]: time="2025-01-30T14:05:15.652094855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:05:15.652193 containerd[1470]: time="2025-01-30T14:05:15.652190633Z" level=info msg="RemovePodSandbox \"9296b4a5d1d4e382dcedf8ee2e3695846a68eb174a8eb8968b47154d12a1e238\" returns successfully" Jan 30 14:05:15.652976 containerd[1470]: time="2025-01-30T14:05:15.652923421Z" level=info msg="StopPodSandbox for \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\"" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.706 [WARNING][4891] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0", GenerateName:"calico-kube-controllers-764f8fb56f-", Namespace:"calico-system", SelfLink:"", UID:"ed528f1e-84ec-4c23-bd0c-158afa9a4b29", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f8fb56f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9", Pod:"calico-kube-controllers-764f8fb56f-czn89", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d3b76d6514", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.706 [INFO][4891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.706 [INFO][4891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" iface="eth0" netns="" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.707 [INFO][4891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.707 [INFO][4891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.736 [INFO][4897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.736 [INFO][4897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.736 [INFO][4897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.745 [WARNING][4897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.745 [INFO][4897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.747 [INFO][4897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:15.749696 containerd[1470]: 2025-01-30 14:05:15.748 [INFO][4891] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.750546 containerd[1470]: time="2025-01-30T14:05:15.749767620Z" level=info msg="TearDown network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\" successfully" Jan 30 14:05:15.750546 containerd[1470]: time="2025-01-30T14:05:15.749800165Z" level=info msg="StopPodSandbox for \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\" returns successfully" Jan 30 14:05:15.750546 containerd[1470]: time="2025-01-30T14:05:15.750421664Z" level=info msg="RemovePodSandbox for \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\"" Jan 30 14:05:15.750546 containerd[1470]: time="2025-01-30T14:05:15.750505307Z" level=info msg="Forcibly stopping sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\"" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.797 [WARNING][4915] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0", GenerateName:"calico-kube-controllers-764f8fb56f-", Namespace:"calico-system", SelfLink:"", UID:"ed528f1e-84ec-4c23-bd0c-158afa9a4b29", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"764f8fb56f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"b0b4eb6f3170336b0ef3157aa95bef450f660a2fafdfe7b7604c39f2f7a815b9", Pod:"calico-kube-controllers-764f8fb56f-czn89", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.67.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d3b76d6514", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.797 [INFO][4915] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.797 
[INFO][4915] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" iface="eth0" netns="" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.797 [INFO][4915] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.797 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.840 [INFO][4921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.841 [INFO][4921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.841 [INFO][4921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.852 [WARNING][4921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.853 [INFO][4921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" HandleID="k8s-pod-network.ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--kube--controllers--764f8fb56f--czn89-eth0" Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.857 [INFO][4921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:15.861101 containerd[1470]: 2025-01-30 14:05:15.859 [INFO][4915] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1" Jan 30 14:05:15.862127 containerd[1470]: time="2025-01-30T14:05:15.861142330Z" level=info msg="TearDown network for sandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\" successfully" Jan 30 14:05:15.865856 containerd[1470]: time="2025-01-30T14:05:15.865798583Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:05:15.866018 containerd[1470]: time="2025-01-30T14:05:15.865890018Z" level=info msg="RemovePodSandbox \"ec99e9f6ecf02ab64030962e105f630349fb1800498de2c1916ac3426253a0c1\" returns successfully" Jan 30 14:05:15.866844 containerd[1470]: time="2025-01-30T14:05:15.866486849Z" level=info msg="StopPodSandbox for \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\"" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.914 [WARNING][4939] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"7023258d-44a7-4f54-855a-e497a3b14836", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d", Pod:"calico-apiserver-7dc6458667-6bqgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6df37299b6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.914 [INFO][4939] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.914 [INFO][4939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" iface="eth0" netns="" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.914 [INFO][4939] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.914 [INFO][4939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.943 [INFO][4946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.943 [INFO][4946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.943 [INFO][4946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.952 [WARNING][4946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.952 [INFO][4946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.954 [INFO][4946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:15.957350 containerd[1470]: 2025-01-30 14:05:15.955 [INFO][4939] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:15.957350 containerd[1470]: time="2025-01-30T14:05:15.957173441Z" level=info msg="TearDown network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\" successfully" Jan 30 14:05:15.957350 containerd[1470]: time="2025-01-30T14:05:15.957208219Z" level=info msg="StopPodSandbox for \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\" returns successfully" Jan 30 14:05:15.958312 containerd[1470]: time="2025-01-30T14:05:15.957888267Z" level=info msg="RemovePodSandbox for \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\"" Jan 30 14:05:15.958312 containerd[1470]: time="2025-01-30T14:05:15.957934325Z" level=info msg="Forcibly stopping sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\"" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.007 [WARNING][4964] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"7023258d-44a7-4f54-855a-e497a3b14836", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"3952af542088d54efad13ecfc2de85b9b3c0479c45460eaa44d0ba0ec497767d", Pod:"calico-apiserver-7dc6458667-6bqgk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6df37299b6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.008 [INFO][4964] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.008 [INFO][4964] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" iface="eth0" netns="" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.008 [INFO][4964] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.008 [INFO][4964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.036 [INFO][4970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.036 [INFO][4970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.036 [INFO][4970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.045 [WARNING][4970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.045 [INFO][4970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" HandleID="k8s-pod-network.976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--6bqgk-eth0" Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.048 [INFO][4970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:16.051617 containerd[1470]: 2025-01-30 14:05:16.050 [INFO][4964] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a" Jan 30 14:05:16.051617 containerd[1470]: time="2025-01-30T14:05:16.051452339Z" level=info msg="TearDown network for sandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\" successfully" Jan 30 14:05:16.057317 containerd[1470]: time="2025-01-30T14:05:16.057220104Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:05:16.057677 containerd[1470]: time="2025-01-30T14:05:16.057328830Z" level=info msg="RemovePodSandbox \"976ea1f213edae9e66833f11f3b747258a27ff9b76650b7494184f67d4e48a3a\" returns successfully" Jan 30 14:05:16.058178 containerd[1470]: time="2025-01-30T14:05:16.057992237Z" level=info msg="StopPodSandbox for \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\"" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.111 [WARNING][4988] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7ec8df9f-3ea0-4291-8547-137f7df6ece5", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d", Pod:"coredns-6f6b679f8f-gp6sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali7352c7359d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.112 [INFO][4988] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.112 [INFO][4988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" iface="eth0" netns="" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.112 [INFO][4988] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.112 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.145 [INFO][4994] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.146 [INFO][4994] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.146 [INFO][4994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.155 [WARNING][4994] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.155 [INFO][4994] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.157 [INFO][4994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:16.160606 containerd[1470]: 2025-01-30 14:05:16.159 [INFO][4988] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.161440 containerd[1470]: time="2025-01-30T14:05:16.160802876Z" level=info msg="TearDown network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\" successfully" Jan 30 14:05:16.161440 containerd[1470]: time="2025-01-30T14:05:16.160852001Z" level=info msg="StopPodSandbox for \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\" returns successfully" Jan 30 14:05:16.161952 containerd[1470]: time="2025-01-30T14:05:16.161550889Z" level=info msg="RemovePodSandbox for \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\"" Jan 30 14:05:16.161952 containerd[1470]: time="2025-01-30T14:05:16.161612435Z" level=info msg="Forcibly stopping sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\"" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.218 [WARNING][5012] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7ec8df9f-3ea0-4291-8547-137f7df6ece5", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"0290beadc64d04d5d9a59f285ef30c82dfd0d9e4d5f75dc5057f8c9dfd5d207d", Pod:"coredns-6f6b679f8f-gp6sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.67.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7352c7359d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 
14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.218 [INFO][5012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.218 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" iface="eth0" netns="" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.218 [INFO][5012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.218 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.247 [INFO][5019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.247 [INFO][5019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.247 [INFO][5019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.256 [WARNING][5019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.257 [INFO][5019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" HandleID="k8s-pod-network.2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--gp6sd-eth0" Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.260 [INFO][5019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:16.262835 containerd[1470]: 2025-01-30 14:05:16.261 [INFO][5012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901" Jan 30 14:05:16.263812 containerd[1470]: time="2025-01-30T14:05:16.262947373Z" level=info msg="TearDown network for sandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\" successfully" Jan 30 14:05:16.268448 containerd[1470]: time="2025-01-30T14:05:16.268354278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:05:16.268767 containerd[1470]: time="2025-01-30T14:05:16.268463455Z" level=info msg="RemovePodSandbox \"2ad057fab047eb40936d6def7e9f930e63c3505a23689d0a377593f86dcf9901\" returns successfully" Jan 30 14:05:16.269119 containerd[1470]: time="2025-01-30T14:05:16.269087959Z" level=info msg="StopPodSandbox for \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\"" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.321 [WARNING][5038] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"768db4e0-04b4-4bca-96da-7fc689135d38", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef", Pod:"calico-apiserver-7dc6458667-hv869", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6077eba62eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.322 [INFO][5038] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.322 [INFO][5038] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" iface="eth0" netns="" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.322 [INFO][5038] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.322 [INFO][5038] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.351 [INFO][5045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.351 [INFO][5045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.351 [INFO][5045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.359 [WARNING][5045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.359 [INFO][5045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.361 [INFO][5045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:16.364996 containerd[1470]: 2025-01-30 14:05:16.363 [INFO][5038] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.364996 containerd[1470]: time="2025-01-30T14:05:16.364950663Z" level=info msg="TearDown network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\" successfully" Jan 30 14:05:16.364996 containerd[1470]: time="2025-01-30T14:05:16.364988219Z" level=info msg="StopPodSandbox for \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\" returns successfully" Jan 30 14:05:16.367562 containerd[1470]: time="2025-01-30T14:05:16.367020634Z" level=info msg="RemovePodSandbox for \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\"" Jan 30 14:05:16.367562 containerd[1470]: time="2025-01-30T14:05:16.367071136Z" level=info msg="Forcibly stopping sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\"" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.420 [WARNING][5063] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0", GenerateName:"calico-apiserver-7dc6458667-", Namespace:"calico-apiserver", SelfLink:"", UID:"768db4e0-04b4-4bca-96da-7fc689135d38", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7dc6458667", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"21f33d07fab533b6505e750186334e0fec9344ed911322f73fcbfca19ff93bef", Pod:"calico-apiserver-7dc6458667-hv869", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.67.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6077eba62eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.421 [INFO][5063] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.421 [INFO][5063] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" iface="eth0" netns="" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.421 [INFO][5063] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.421 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.447 [INFO][5070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.447 [INFO][5070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.447 [INFO][5070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.462 [WARNING][5070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.462 [INFO][5070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" HandleID="k8s-pod-network.d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-calico--apiserver--7dc6458667--hv869-eth0" Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.465 [INFO][5070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:16.470477 containerd[1470]: 2025-01-30 14:05:16.468 [INFO][5063] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf" Jan 30 14:05:16.473834 containerd[1470]: time="2025-01-30T14:05:16.471788673Z" level=info msg="TearDown network for sandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\" successfully" Jan 30 14:05:16.479808 containerd[1470]: time="2025-01-30T14:05:16.479460262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 14:05:16.479808 containerd[1470]: time="2025-01-30T14:05:16.479587075Z" level=info msg="RemovePodSandbox \"d43c3bbd12d051e417f84c3e34d273091aedb338744061ee628940860e9f6dbf\" returns successfully" Jan 30 14:05:16.481520 containerd[1470]: time="2025-01-30T14:05:16.481192437Z" level=info msg="StopPodSandbox for \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\"" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.567 [WARNING][5088] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a144ec1-46fe-4595-a551-f8f4cec9f827", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959", Pod:"csi-node-driver-92kn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4b9fcf6e67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.569 [INFO][5088] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.569 [INFO][5088] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" iface="eth0" netns="" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.569 [INFO][5088] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.569 [INFO][5088] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.619 [INFO][5095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.619 [INFO][5095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.620 [INFO][5095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.632 [WARNING][5095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.632 [INFO][5095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.636 [INFO][5095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:16.641390 containerd[1470]: 2025-01-30 14:05:16.638 [INFO][5088] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.644732 containerd[1470]: time="2025-01-30T14:05:16.643438649Z" level=info msg="TearDown network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\" successfully" Jan 30 14:05:16.644732 containerd[1470]: time="2025-01-30T14:05:16.643539783Z" level=info msg="StopPodSandbox for \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\" returns successfully" Jan 30 14:05:16.644732 containerd[1470]: time="2025-01-30T14:05:16.644237502Z" level=info msg="RemovePodSandbox for \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\"" Jan 30 14:05:16.644732 containerd[1470]: time="2025-01-30T14:05:16.644278657Z" level=info msg="Forcibly stopping sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\"" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.723 [WARNING][5113] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5a144ec1-46fe-4595-a551-f8f4cec9f827", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 4, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-6fa95e80bff3fbec2dbb.c.flatcar-212911.internal", ContainerID:"2be25c5d1c3251dafa9d3c4a545886815adc3b6764b7c3289f78b1c969f7a959", Pod:"csi-node-driver-92kn2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.67.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4b9fcf6e67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.724 [INFO][5113] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.724 [INFO][5113] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" iface="eth0" netns="" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.724 [INFO][5113] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.724 [INFO][5113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.772 [INFO][5119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.772 [INFO][5119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.773 [INFO][5119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.787 [WARNING][5119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.788 [INFO][5119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" HandleID="k8s-pod-network.cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Workload="ci--4081--3--0--6fa95e80bff3fbec2dbb.c.flatcar--212911.internal-k8s-csi--node--driver--92kn2-eth0" Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.791 [INFO][5119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:05:16.797017 containerd[1470]: 2025-01-30 14:05:16.792 [INFO][5113] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d" Jan 30 14:05:16.797017 containerd[1470]: time="2025-01-30T14:05:16.795442701Z" level=info msg="TearDown network for sandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\" successfully" Jan 30 14:05:16.803295 containerd[1470]: time="2025-01-30T14:05:16.803229684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:05:16.803523 containerd[1470]: time="2025-01-30T14:05:16.803351322Z" level=info msg="RemovePodSandbox \"cbc1951be8d52d7c93fb8c2b15145e444fc131fb303c2979640a8fdd0a7bc02d\" returns successfully" Jan 30 14:05:20.226808 systemd[1]: Started sshd@9-10.128.0.55:22-139.178.68.195:43200.service - OpenSSH per-connection server daemon (139.178.68.195:43200). 
Jan 30 14:05:20.508315 sshd[5126]: Accepted publickey for core from 139.178.68.195 port 43200 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:05:20.510230 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:20.517143 systemd-logind[1441]: New session 9 of user core. Jan 30 14:05:20.524677 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:05:20.821844 sshd[5126]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:20.828091 systemd[1]: sshd@9-10.128.0.55:22-139.178.68.195:43200.service: Deactivated successfully. Jan 30 14:05:20.831317 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:05:20.832692 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:05:20.834459 systemd-logind[1441]: Removed session 9. Jan 30 14:05:25.882677 systemd[1]: Started sshd@10-10.128.0.55:22-139.178.68.195:47754.service - OpenSSH per-connection server daemon (139.178.68.195:47754). Jan 30 14:05:26.170181 sshd[5144]: Accepted publickey for core from 139.178.68.195 port 47754 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:05:26.172072 sshd[5144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:26.179505 systemd-logind[1441]: New session 10 of user core. Jan 30 14:05:26.185617 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:05:26.468884 sshd[5144]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:26.474091 systemd[1]: sshd@10-10.128.0.55:22-139.178.68.195:47754.service: Deactivated successfully. Jan 30 14:05:26.477079 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:05:26.479777 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:05:26.482435 systemd-logind[1441]: Removed session 10. 
Jan 30 14:05:26.526814 systemd[1]: Started sshd@11-10.128.0.55:22-139.178.68.195:47756.service - OpenSSH per-connection server daemon (139.178.68.195:47756). Jan 30 14:05:26.810595 sshd[5158]: Accepted publickey for core from 139.178.68.195 port 47756 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:05:26.812649 sshd[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:26.819541 systemd-logind[1441]: New session 11 of user core. Jan 30 14:05:26.824622 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:05:27.145549 sshd[5158]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:27.152601 systemd[1]: sshd@11-10.128.0.55:22-139.178.68.195:47756.service: Deactivated successfully. Jan 30 14:05:27.155599 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:05:27.157008 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:05:27.159315 systemd-logind[1441]: Removed session 11. Jan 30 14:05:27.197831 systemd[1]: Started sshd@12-10.128.0.55:22-139.178.68.195:47764.service - OpenSSH per-connection server daemon (139.178.68.195:47764). Jan 30 14:05:27.483422 sshd[5169]: Accepted publickey for core from 139.178.68.195 port 47764 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:05:27.485416 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:27.492487 systemd-logind[1441]: New session 12 of user core. Jan 30 14:05:27.499693 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:05:27.778322 sshd[5169]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:27.785190 systemd[1]: sshd@12-10.128.0.55:22-139.178.68.195:47764.service: Deactivated successfully. Jan 30 14:05:27.787855 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:05:27.788984 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. 
Jan 30 14:05:27.790638 systemd-logind[1441]: Removed session 12. Jan 30 14:05:32.835808 systemd[1]: Started sshd@13-10.128.0.55:22-139.178.68.195:47768.service - OpenSSH per-connection server daemon (139.178.68.195:47768). Jan 30 14:05:33.114448 sshd[5228]: Accepted publickey for core from 139.178.68.195 port 47768 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:05:33.116490 sshd[5228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:33.123046 systemd-logind[1441]: New session 13 of user core. Jan 30 14:05:33.130926 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:05:33.478749 sshd[5228]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:33.485295 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:05:33.488398 systemd[1]: sshd@13-10.128.0.55:22-139.178.68.195:47768.service: Deactivated successfully. Jan 30 14:05:33.494561 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:05:33.500015 systemd-logind[1441]: Removed session 13. Jan 30 14:05:38.531822 systemd[1]: Started sshd@14-10.128.0.55:22-139.178.68.195:56728.service - OpenSSH per-connection server daemon (139.178.68.195:56728). Jan 30 14:05:38.806823 sshd[5247]: Accepted publickey for core from 139.178.68.195 port 56728 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ Jan 30 14:05:38.808907 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:38.814401 systemd-logind[1441]: New session 14 of user core. Jan 30 14:05:38.821642 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:05:39.107595 sshd[5247]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:39.118110 systemd[1]: sshd@14-10.128.0.55:22-139.178.68.195:56728.service: Deactivated successfully. Jan 30 14:05:39.118748 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. 
Jan 30 14:05:39.121918 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 14:05:39.124640 systemd-logind[1441]: Removed session 14.
Jan 30 14:05:44.160814 systemd[1]: Started sshd@15-10.128.0.55:22-139.178.68.195:56736.service - OpenSSH per-connection server daemon (139.178.68.195:56736).
Jan 30 14:05:44.435109 sshd[5259]: Accepted publickey for core from 139.178.68.195 port 56736 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:05:44.437293 sshd[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:05:44.443801 systemd-logind[1441]: New session 15 of user core.
Jan 30 14:05:44.448595 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 14:05:44.728653 sshd[5259]: pam_unix(sshd:session): session closed for user core
Jan 30 14:05:44.733847 systemd[1]: sshd@15-10.128.0.55:22-139.178.68.195:56736.service: Deactivated successfully.
Jan 30 14:05:44.736695 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 14:05:44.738576 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit.
Jan 30 14:05:44.740215 systemd-logind[1441]: Removed session 15.
Jan 30 14:05:49.784821 systemd[1]: Started sshd@16-10.128.0.55:22-139.178.68.195:45514.service - OpenSSH per-connection server daemon (139.178.68.195:45514).
Jan 30 14:05:50.069855 sshd[5274]: Accepted publickey for core from 139.178.68.195 port 45514 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:05:50.072117 sshd[5274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:05:50.078653 systemd-logind[1441]: New session 16 of user core.
Jan 30 14:05:50.086724 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 14:05:50.364297 sshd[5274]: pam_unix(sshd:session): session closed for user core
Jan 30 14:05:50.370586 systemd[1]: sshd@16-10.128.0.55:22-139.178.68.195:45514.service: Deactivated successfully.
Jan 30 14:05:50.373819 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 14:05:50.375476 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit.
Jan 30 14:05:50.377271 systemd-logind[1441]: Removed session 16.
Jan 30 14:05:50.419824 systemd[1]: Started sshd@17-10.128.0.55:22-139.178.68.195:45522.service - OpenSSH per-connection server daemon (139.178.68.195:45522).
Jan 30 14:05:50.706402 sshd[5287]: Accepted publickey for core from 139.178.68.195 port 45522 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:05:50.708326 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:05:50.714443 systemd-logind[1441]: New session 17 of user core.
Jan 30 14:05:50.721625 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 14:05:51.084618 sshd[5287]: pam_unix(sshd:session): session closed for user core
Jan 30 14:05:51.090126 systemd[1]: sshd@17-10.128.0.55:22-139.178.68.195:45522.service: Deactivated successfully.
Jan 30 14:05:51.093093 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 14:05:51.095732 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit.
Jan 30 14:05:51.098080 systemd-logind[1441]: Removed session 17.
Jan 30 14:05:51.140455 systemd[1]: Started sshd@18-10.128.0.55:22-139.178.68.195:45532.service - OpenSSH per-connection server daemon (139.178.68.195:45532).
Jan 30 14:05:51.418959 sshd[5298]: Accepted publickey for core from 139.178.68.195 port 45532 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:05:51.420650 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:05:51.427685 systemd-logind[1441]: New session 18 of user core.
Jan 30 14:05:51.435674 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 14:05:54.045810 sshd[5298]: pam_unix(sshd:session): session closed for user core
Jan 30 14:05:54.053541 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit.
Jan 30 14:05:54.054914 systemd[1]: sshd@18-10.128.0.55:22-139.178.68.195:45532.service: Deactivated successfully.
Jan 30 14:05:54.061909 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 14:05:54.068001 systemd-logind[1441]: Removed session 18.
Jan 30 14:05:54.105798 systemd[1]: Started sshd@19-10.128.0.55:22-139.178.68.195:45538.service - OpenSSH per-connection server daemon (139.178.68.195:45538).
Jan 30 14:05:54.389683 sshd[5318]: Accepted publickey for core from 139.178.68.195 port 45538 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:05:54.391968 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:05:54.400274 systemd-logind[1441]: New session 19 of user core.
Jan 30 14:05:54.406657 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 14:05:54.841262 sshd[5318]: pam_unix(sshd:session): session closed for user core
Jan 30 14:05:54.845948 systemd[1]: sshd@19-10.128.0.55:22-139.178.68.195:45538.service: Deactivated successfully.
Jan 30 14:05:54.850211 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 14:05:54.852403 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit.
Jan 30 14:05:54.854241 systemd-logind[1441]: Removed session 19.
Jan 30 14:05:54.899138 systemd[1]: Started sshd@20-10.128.0.55:22-139.178.68.195:48274.service - OpenSSH per-connection server daemon (139.178.68.195:48274).
Jan 30 14:05:55.184557 sshd[5329]: Accepted publickey for core from 139.178.68.195 port 48274 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:05:55.186494 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:05:55.193675 systemd-logind[1441]: New session 20 of user core.
Jan 30 14:05:55.200626 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 14:05:55.472852 sshd[5329]: pam_unix(sshd:session): session closed for user core
Jan 30 14:05:55.477925 systemd[1]: sshd@20-10.128.0.55:22-139.178.68.195:48274.service: Deactivated successfully.
Jan 30 14:05:55.481078 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 14:05:55.483278 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit.
Jan 30 14:05:55.485049 systemd-logind[1441]: Removed session 20.
Jan 30 14:05:59.497830 systemd[1]: run-containerd-runc-k8s.io-25b273f6838f62e4137eb8e7015d429023fa252c2888906bc10a8f8bc297095b-runc.lQ7cEO.mount: Deactivated successfully.
Jan 30 14:05:59.774819 systemd[1]: Started sshd@21-10.128.0.55:22-115.127.82.114:41748.service - OpenSSH per-connection server daemon (115.127.82.114:41748).
Jan 30 14:06:00.533259 systemd[1]: Started sshd@22-10.128.0.55:22-139.178.68.195:48276.service - OpenSSH per-connection server daemon (139.178.68.195:48276).
Jan 30 14:06:00.833061 sshd[5366]: Accepted publickey for core from 139.178.68.195 port 48276 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:06:00.834939 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:06:00.841851 systemd-logind[1441]: New session 21 of user core.
Jan 30 14:06:00.848630 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 14:06:01.124013 sshd[5366]: pam_unix(sshd:session): session closed for user core
Jan 30 14:06:01.130775 systemd[1]: sshd@22-10.128.0.55:22-139.178.68.195:48276.service: Deactivated successfully.
Jan 30 14:06:01.133333 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 14:06:01.135464 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit.
Jan 30 14:06:01.137187 systemd-logind[1441]: Removed session 21.
Jan 30 14:06:06.181600 systemd[1]: Started sshd@23-10.128.0.55:22-139.178.68.195:54016.service - OpenSSH per-connection server daemon (139.178.68.195:54016).
Jan 30 14:06:06.475926 sshd[5401]: Accepted publickey for core from 139.178.68.195 port 54016 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:06:06.477841 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:06:06.484925 systemd-logind[1441]: New session 22 of user core.
Jan 30 14:06:06.491627 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 14:06:06.794864 sshd[5401]: pam_unix(sshd:session): session closed for user core
Jan 30 14:06:06.799950 systemd[1]: sshd@23-10.128.0.55:22-139.178.68.195:54016.service: Deactivated successfully.
Jan 30 14:06:06.803015 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 14:06:06.805432 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit.
Jan 30 14:06:06.807644 systemd-logind[1441]: Removed session 22.
Jan 30 14:06:11.850855 systemd[1]: Started sshd@24-10.128.0.55:22-139.178.68.195:54032.service - OpenSSH per-connection server daemon (139.178.68.195:54032).
Jan 30 14:06:12.128182 sshd[5438]: Accepted publickey for core from 139.178.68.195 port 54032 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:06:12.130528 sshd[5438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:06:12.137048 systemd-logind[1441]: New session 23 of user core.
Jan 30 14:06:12.145633 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 30 14:06:12.412874 sshd[5438]: pam_unix(sshd:session): session closed for user core
Jan 30 14:06:12.419079 systemd[1]: sshd@24-10.128.0.55:22-139.178.68.195:54032.service: Deactivated successfully.
Jan 30 14:06:12.421961 systemd[1]: session-23.scope: Deactivated successfully.
Jan 30 14:06:12.423096 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit.
Jan 30 14:06:12.424642 systemd-logind[1441]: Removed session 23.
Jan 30 14:06:17.475852 systemd[1]: Started sshd@25-10.128.0.55:22-139.178.68.195:39446.service - OpenSSH per-connection server daemon (139.178.68.195:39446).
Jan 30 14:06:17.776541 sshd[5455]: Accepted publickey for core from 139.178.68.195 port 39446 ssh2: RSA SHA256:2oDokTF0Tx8ll4mIngUZhm+57o3SOYFDwEDwY2pTPdQ
Jan 30 14:06:17.778602 sshd[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:06:17.786295 systemd-logind[1441]: New session 24 of user core.
Jan 30 14:06:17.789733 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 30 14:06:18.064173 sshd[5455]: pam_unix(sshd:session): session closed for user core
Jan 30 14:06:18.070146 systemd[1]: sshd@25-10.128.0.55:22-139.178.68.195:39446.service: Deactivated successfully.
Jan 30 14:06:18.073239 systemd[1]: session-24.scope: Deactivated successfully.
Jan 30 14:06:18.074570 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit.
Jan 30 14:06:18.076154 systemd-logind[1441]: Removed session 24.