Jan 17 12:18:29.089452 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:18:29.089500 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:18:29.089518 kernel: BIOS-provided physical RAM map: Jan 17 12:18:29.089532 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 17 12:18:29.089545 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 17 12:18:29.089558 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 17 12:18:29.089574 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 17 12:18:29.089591 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 17 12:18:29.089605 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 17 12:18:29.089619 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 17 12:18:29.089633 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 17 12:18:29.089647 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 17 12:18:29.089661 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 17 12:18:29.089675 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 17 12:18:29.089696 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 17 12:18:29.089711 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 17 12:18:29.089727 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 17 12:18:29.089742 kernel: NX (Execute Disable) protection: active Jan 17 12:18:29.089757 kernel: APIC: Static calls initialized Jan 17 12:18:29.089773 kernel: efi: EFI v2.7 by EDK II Jan 17 12:18:29.089789 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 17 12:18:29.089804 kernel: SMBIOS 2.4 present. 
Jan 17 12:18:29.089821 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 17 12:18:29.089836 kernel: Hypervisor detected: KVM Jan 17 12:18:29.089855 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:18:29.089871 kernel: kvm-clock: using sched offset of 12190431958 cycles Jan 17 12:18:29.089889 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:18:29.089905 kernel: tsc: Detected 2299.998 MHz processor Jan 17 12:18:29.089920 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:18:29.089937 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:18:29.089953 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 17 12:18:29.089969 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 17 12:18:29.089986 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:18:29.090005 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 17 12:18:29.090021 kernel: Using GB pages for direct mapping Jan 17 12:18:29.090037 kernel: Secure boot disabled Jan 17 12:18:29.090053 kernel: ACPI: Early table checksum verification disabled Jan 17 12:18:29.090070 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 17 12:18:29.090086 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 17 12:18:29.090103 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 17 12:18:29.090127 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 17 12:18:29.090147 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 17 12:18:29.090164 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 17 12:18:29.090182 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 17 12:18:29.090199 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 17 12:18:29.090216 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 17 12:18:29.090234 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 17 12:18:29.090280 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 17 12:18:29.090300 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 17 12:18:29.090317 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 17 12:18:29.090342 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 17 12:18:29.090359 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 17 12:18:29.090377 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 17 12:18:29.090394 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 17 12:18:29.090411 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 17 12:18:29.090429 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 17 12:18:29.090451 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 17 12:18:29.090469 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:18:29.090486 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:18:29.090504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 17 12:18:29.090521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 17 12:18:29.090539 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 17 12:18:29.090555 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 17 12:18:29.090570 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 17 12:18:29.090587 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 17 12:18:29.090608 kernel: Zone ranges: Jan 17 12:18:29.090625 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:18:29.090643 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 12:18:29.090658 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 17 12:18:29.090676 kernel: Movable zone start for each node Jan 17 12:18:29.090694 kernel: Early memory node ranges Jan 17 12:18:29.090709 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 17 12:18:29.090727 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 17 12:18:29.090744 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 17 12:18:29.090766 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 17 12:18:29.090783 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 17 12:18:29.090800 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 17 12:18:29.090818 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:18:29.090834 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 17 12:18:29.090852 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 17 12:18:29.090868 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 17 12:18:29.090885 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 17 12:18:29.090901 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 17 12:18:29.090917 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:18:29.090938 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:18:29.090954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:18:29.090971 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:18:29.090989 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:18:29.091006 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:18:29.091023 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:18:29.091040 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:18:29.091057 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 12:18:29.091079 kernel: Booting paravirtualized kernel on KVM Jan 17 12:18:29.091095 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:18:29.091112 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:18:29.091129 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:18:29.091146 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:18:29.091163 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:18:29.091180 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:18:29.091197 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:18:29.091216 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:18:29.091237 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:18:29.091270 kernel: random: crng init done Jan 17 12:18:29.091286 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 12:18:29.091300 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:18:29.091316 kernel: Fallback order for Node 0: 0 Jan 17 12:18:29.091339 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 17 12:18:29.091357 kernel: Policy zone: Normal Jan 17 12:18:29.091372 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:18:29.091389 kernel: software IO TLB: area num 2. Jan 17 12:18:29.091412 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 346940K reserved, 0K cma-reserved) Jan 17 12:18:29.091430 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:18:29.091447 kernel: Kernel/User page tables isolation: enabled Jan 17 12:18:29.091465 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:18:29.091483 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:18:29.091500 kernel: Dynamic Preempt: voluntary Jan 17 12:18:29.091518 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:18:29.091538 kernel: rcu: RCU event tracing is enabled. Jan 17 12:18:29.091574 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:18:29.091593 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:18:29.091612 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:18:29.091635 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:18:29.091654 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:18:29.091673 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:18:29.091691 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 17 12:18:29.091709 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:18:29.091729 kernel: Console: colour dummy device 80x25 Jan 17 12:18:29.091752 kernel: printk: console [ttyS0] enabled Jan 17 12:18:29.091771 kernel: ACPI: Core revision 20230628 Jan 17 12:18:29.091790 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:18:29.091809 kernel: x2apic enabled Jan 17 12:18:29.091827 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:18:29.091846 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 17 12:18:29.091866 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 12:18:29.091885 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 17 12:18:29.091909 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 17 12:18:29.091928 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 17 12:18:29.091946 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:18:29.091965 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 17 12:18:29.091984 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 17 12:18:29.092003 kernel: Spectre V2 : Mitigation: IBRS Jan 17 12:18:29.092022 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:18:29.092042 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:18:29.092060 kernel: RETBleed: Mitigation: IBRS Jan 17 12:18:29.092083 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:18:29.092103 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 17 12:18:29.092121 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:18:29.092140 kernel: MDS: Mitigation: Clear CPU buffers Jan 17 12:18:29.092160 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:18:29.092178 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:18:29.092197 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:18:29.092216 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:18:29.092234 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:18:29.092283 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 17 12:18:29.092303 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:18:29.092323 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:18:29.092349 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:18:29.092368 kernel: landlock: Up and running. Jan 17 12:18:29.092387 kernel: SELinux: Initializing. Jan 17 12:18:29.092407 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:18:29.092426 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:18:29.092445 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 17 12:18:29.092470 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:18:29.092489 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:18:29.092508 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:18:29.092526 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 17 12:18:29.092545 kernel: signal: max sigframe size: 1776 Jan 17 12:18:29.092564 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:18:29.092585 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:18:29.092604 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:18:29.092623 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:18:29.092645 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:18:29.092663 kernel: .... node #0, CPUs: #1 Jan 17 12:18:29.092682 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 17 12:18:29.092701 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 17 12:18:29.092719 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:18:29.092737 kernel: smpboot: Max logical packages: 1 Jan 17 12:18:29.092756 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 17 12:18:29.092775 kernel: devtmpfs: initialized Jan 17 12:18:29.092798 kernel: x86/mm: Memory block size: 128MB Jan 17 12:18:29.092816 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 17 12:18:29.092835 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:18:29.092853 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:18:29.092871 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:18:29.092889 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:18:29.092907 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:18:29.092926 kernel: audit: type=2000 audit(1737116308.097:1): state=initialized audit_enabled=0 res=1 Jan 17 12:18:29.092945 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:18:29.092970 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:18:29.093010 kernel: cpuidle: using governor menu Jan 17 12:18:29.093029 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:18:29.093048 kernel: dca service started, version 1.12.1 Jan 17 12:18:29.093067 kernel: PCI: Using configuration type 1 for base access Jan 17 12:18:29.093086 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:18:29.093105 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:18:29.093124 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:18:29.093143 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:18:29.093167 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:18:29.093187 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:18:29.093206 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:18:29.093226 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:18:29.093245 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:18:29.093280 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 17 12:18:29.093298 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:18:29.093316 kernel: ACPI: Interpreter enabled Jan 17 12:18:29.093339 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:18:29.093362 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:18:29.093382 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:18:29.093400 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 12:18:29.093417 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 17 12:18:29.093436 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:18:29.093703 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:18:29.093917 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 17 12:18:29.094109 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 17 12:18:29.094139 kernel: PCI host bridge to bus 0000:00 Jan 17 12:18:29.094357 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:18:29.094529 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:18:29.094694 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:18:29.094857 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 17 12:18:29.095020 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:18:29.095226 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 17 12:18:29.095469 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 17 12:18:29.095667 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 17 12:18:29.095852 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 17 12:18:29.096040 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 17 12:18:29.096221 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 17 12:18:29.096475 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 17 12:18:29.096663 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:18:29.097134 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 17 12:18:29.097770 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 17 12:18:29.098080 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:18:29.098304 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 17 12:18:29.098501 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 17 12:18:29.098533 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:18:29.098553 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:18:29.098573 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:18:29.098592 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:18:29.098612 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 17 12:18:29.098631 kernel: iommu: Default domain type: Translated Jan 17 12:18:29.098649 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:18:29.098667 kernel: efivars: Registered efivars operations Jan 17 12:18:29.098687 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:18:29.098710 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:18:29.098729 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 17 12:18:29.098749 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 17 12:18:29.098768 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 17 12:18:29.098787 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 17 12:18:29.098806 kernel: vgaarb: loaded Jan 17 12:18:29.098826 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:18:29.098845 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:18:29.098865 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:18:29.098889 kernel: pnp: PnP ACPI init Jan 17 12:18:29.098907 kernel: pnp: PnP ACPI: found 7 devices Jan 17 12:18:29.098928 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:18:29.098947 kernel: NET: Registered PF_INET protocol family Jan 17 12:18:29.098967 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:18:29.098987 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 12:18:29.099007 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:18:29.099026 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:18:29.099046 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 12:18:29.099069 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 12:18:29.099089 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:18:29.099108 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:18:29.099128 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:18:29.099147 kernel: NET: Registered PF_XDP protocol family Jan 17 12:18:29.099356 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:18:29.099523 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:18:29.099680 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:18:29.099844 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 17 12:18:29.100030 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 17 12:18:29.100055 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:18:29.100074 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 12:18:29.100092 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 17 12:18:29.100112 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:18:29.100131 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 17 12:18:29.100150 kernel: clocksource: Switched to clocksource tsc Jan 17 12:18:29.100173 kernel: Initialise system trusted keyrings Jan 17 12:18:29.100192 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 17 12:18:29.100210 kernel: Key type asymmetric registered Jan 17 12:18:29.100227 kernel: Asymmetric key parser 'x509' registered Jan 17 12:18:29.100245 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:18:29.100290 kernel: io scheduler mq-deadline registered Jan 17 12:18:29.100309 kernel: io scheduler kyber registered Jan 17 12:18:29.100327 kernel: io scheduler bfq registered Jan 17 12:18:29.100353 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:18:29.100378 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 17 12:18:29.100581 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 17 12:18:29.100605 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 17 12:18:29.100824 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 17 12:18:29.100850 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 17 12:18:29.101039 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 17 12:18:29.101063 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:18:29.101082 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:18:29.101100 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 12:18:29.101124 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 17 12:18:29.101142 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 17 12:18:29.103505 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 17 12:18:29.103541 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:18:29.103559 kernel: i8042: Warning: Keylock active Jan 17 12:18:29.103577 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:18:29.103597 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:18:29.103792 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 17 12:18:29.103974 kernel: rtc_cmos 00:00: registered as rtc0 Jan 17 12:18:29.104149 kernel: rtc_cmos 00:00: setting system clock to 2025-01-17T12:18:28 UTC (1737116308) Jan 17 12:18:29.104364 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 17 12:18:29.104390 kernel: intel_pstate: CPU model not supported Jan 17 12:18:29.104410 kernel: pstore: Using crash dump compression: deflate Jan 17 12:18:29.104430 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:18:29.104447 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:18:29.104466 kernel: Segment Routing with IPv6 Jan 17 12:18:29.104491 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:18:29.104510 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:18:29.104528 kernel: Key type dns_resolver registered Jan 17 12:18:29.104547 kernel: IPI shorthand broadcast: enabled Jan 17 12:18:29.104566 kernel: sched_clock: Marking stable (852004279, 134709079)->(1026500788, -39787430) Jan 17 12:18:29.104584 kernel: registered taskstats version 1 Jan 17 12:18:29.104603 kernel: Loading compiled-in X.509 certificates Jan 17 12:18:29.104622 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:18:29.104640 kernel: Key type .fscrypt registered Jan 17 12:18:29.104661 kernel: Key type fscrypt-provisioning registered Jan 17 12:18:29.104680 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:18:29.104699 kernel: ima: No architecture policies found Jan 17 
12:18:29.104719 kernel: clk: Disabling unused clocks Jan 17 12:18:29.104738 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:18:29.104758 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:18:29.104777 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:18:29.104797 kernel: Run /init as init process Jan 17 12:18:29.104815 kernel: with arguments: Jan 17 12:18:29.104837 kernel: /init Jan 17 12:18:29.104857 kernel: with environment: Jan 17 12:18:29.104876 kernel: HOME=/ Jan 17 12:18:29.104895 kernel: TERM=linux Jan 17 12:18:29.104914 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:18:29.104934 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:18:29.104956 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:18:29.104984 systemd[1]: Detected virtualization google. Jan 17 12:18:29.105006 systemd[1]: Detected architecture x86-64. Jan 17 12:18:29.105025 systemd[1]: Running in initrd. Jan 17 12:18:29.105046 systemd[1]: No hostname configured, using default hostname. Jan 17 12:18:29.105065 systemd[1]: Hostname set to . Jan 17 12:18:29.105085 systemd[1]: Initializing machine ID from random generator. Jan 17 12:18:29.105105 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:18:29.105126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:18:29.105151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:18:29.105173 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:18:29.105194 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:18:29.105214 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:18:29.105235 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:18:29.105308 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:18:29.105329 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:18:29.105364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:18:29.105386 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:18:29.105428 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:18:29.105452 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:18:29.105472 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:18:29.105494 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:18:29.105519 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:18:29.105541 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:18:29.105563 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:18:29.105585 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 17 12:18:29.105606 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:18:29.105628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:18:29.105649 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:18:29.105670 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:18:29.105692 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:18:29.105716 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:18:29.105737 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:18:29.105759 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:18:29.105781 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:18:29.105803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:18:29.105861 systemd-journald[183]: Collecting audit messages is disabled. Jan 17 12:18:29.105913 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:29.105935 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:18:29.105956 systemd-journald[183]: Journal started Jan 17 12:18:29.105999 systemd-journald[183]: Runtime Journal (/run/log/journal/9a75a13aec0e4b19b8960724b8829322) is 8.0M, max 148.7M, 140.7M free. Jan 17 12:18:29.115382 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:18:29.119137 systemd-modules-load[184]: Inserted module 'overlay' Jan 17 12:18:29.121897 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:18:29.123353 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:18:29.135522 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:18:29.140473 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:18:29.159103 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:29.175409 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:18:29.175453 kernel: Bridge firewalling registered Jan 17 12:18:29.163770 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:18:29.177358 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 17 12:18:29.179564 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:18:29.193521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:18:29.201063 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:18:29.202554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:18:29.211502 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:18:29.220376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:18:29.228722 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:29.239674 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:18:29.245672 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 17 12:18:29.251142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:18:29.291292 dracut-cmdline[216]: dracut-dracut-053 Jan 17 12:18:29.296167 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:18:29.323087 systemd-resolved[218]: Positive Trust Anchors: Jan 17 12:18:29.323109 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:18:29.323175 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:18:29.330456 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 17 12:18:29.332247 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:18:29.348522 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:18:29.399310 kernel: SCSI subsystem initialized Jan 17 12:18:29.410306 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:18:29.422288 kernel: iscsi: registered transport (tcp) Jan 17 12:18:29.446314 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:18:29.446399 kernel: QLogic iSCSI HBA Driver Jan 17 12:18:29.498272 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:18:29.505488 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:18:29.547356 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:18:29.547445 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:18:29.547474 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:18:29.593321 kernel: raid6: avx2x4 gen() 17944 MB/s Jan 17 12:18:29.610302 kernel: raid6: avx2x2 gen() 18019 MB/s Jan 17 12:18:29.627688 kernel: raid6: avx2x1 gen() 13918 MB/s Jan 17 12:18:29.627731 kernel: raid6: using algorithm avx2x2 gen() 18019 MB/s Jan 17 12:18:29.645729 kernel: raid6: .... xor() 17471 MB/s, rmw enabled Jan 17 12:18:29.645771 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:18:29.669301 kernel: xor: automatically using best checksumming function avx Jan 17 12:18:29.840306 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:18:29.854067 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:18:29.858496 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:18:29.887397 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 17 12:18:29.894803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 12:18:29.908744 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:18:29.937705 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 17 12:18:29.975967 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:18:29.985450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:18:30.066918 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:18:30.079508 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:18:30.121106 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:18:30.133575 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:18:30.142409 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:18:30.146426 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:18:30.156730 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:18:30.194974 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:18:30.198325 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:18:30.228794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:18:30.229310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:30.253713 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:18:30.253799 kernel: AES CTR mode by8 optimization enabled Jan 17 12:18:30.262015 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:18:30.276306 kernel: scsi host0: Virtio SCSI HBA Jan 17 12:18:30.277537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:18:30.278754 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:30.293377 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 17 12:18:30.293712 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:30.306752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:30.348969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:30.357436 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 17 12:18:30.368502 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 17 12:18:30.368789 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 17 12:18:30.369044 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 17 12:18:30.370343 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 17 12:18:30.370597 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:18:30.370624 kernel: GPT:17805311 != 25165823 Jan 17 12:18:30.370654 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:18:30.370677 kernel: GPT:17805311 != 25165823 Jan 17 12:18:30.370699 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:18:30.370720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:18:30.370744 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 17 12:18:30.371211 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:18:30.407066 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:18:30.424309 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (460) Jan 17 12:18:30.439286 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (461) Jan 17 12:18:30.442102 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 17 12:18:30.458794 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 17 12:18:30.459037 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 17 12:18:30.477341 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 17 12:18:30.485821 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 12:18:30.491487 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:18:30.521888 disk-uuid[550]: Primary Header is updated. Jan 17 12:18:30.521888 disk-uuid[550]: Secondary Entries is updated. Jan 17 12:18:30.521888 disk-uuid[550]: Secondary Header is updated. Jan 17 12:18:30.540279 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:18:30.567312 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:18:30.582298 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:18:31.584291 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:18:31.584385 disk-uuid[551]: The operation has completed successfully. Jan 17 12:18:31.657975 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:18:31.658124 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:18:31.688487 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:18:31.718965 sh[568]: Success Jan 17 12:18:31.741565 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:18:31.826373 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:18:31.833444 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:18:31.863275 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:18:31.900755 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:18:31.900846 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:31.900872 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:18:31.910200 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:18:31.917063 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:18:31.956352 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 17 12:18:31.962513 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:18:31.963496 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:18:31.970462 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 17 12:18:32.031410 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:32.031471 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:32.031499 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:18:31.999531 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:18:32.078441 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:18:32.078486 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:18:32.078512 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:32.064918 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:18:32.083569 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:18:32.104520 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:18:32.201608 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:18:32.207609 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:18:32.305538 systemd-networkd[751]: lo: Link UP Jan 17 12:18:32.305558 systemd-networkd[751]: lo: Gained carrier Jan 17 12:18:32.308115 systemd-networkd[751]: Enumeration completed Jan 17 12:18:32.315072 ignition[660]: Ignition 2.19.0 Jan 17 12:18:32.308963 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:18:32.315081 ignition[660]: Stage: fetch-offline Jan 17 12:18:32.308969 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:18:32.315124 ignition[660]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:32.309384 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:18:32.315134 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:32.311419 systemd-networkd[751]: eth0: Link UP Jan 17 12:18:32.315292 ignition[660]: parsed url from cmdline: "" Jan 17 12:18:32.311426 systemd-networkd[751]: eth0: Gained carrier Jan 17 12:18:32.315299 ignition[660]: no config URL provided Jan 17 12:18:32.311441 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:18:32.315309 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:18:32.312714 systemd[1]: Reached target network.target - Network. Jan 17 12:18:32.315324 ignition[660]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:18:32.323341 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 12:18:32.315335 ignition[660]: failed to fetch config: resource requires networking Jan 17 12:18:32.335703 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:18:32.315655 ignition[660]: Ignition finished successfully Jan 17 12:18:32.360517 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 12:18:32.405362 ignition[759]: Ignition 2.19.0 Jan 17 12:18:32.415934 unknown[759]: fetched base config from "system" Jan 17 12:18:32.405374 ignition[759]: Stage: fetch Jan 17 12:18:32.415946 unknown[759]: fetched base config from "system" Jan 17 12:18:32.405634 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:32.415957 unknown[759]: fetched user config from "gcp" Jan 17 12:18:32.405647 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:32.418546 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:18:32.405780 ignition[759]: parsed url from cmdline: "" Jan 17 12:18:32.425489 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:18:32.405789 ignition[759]: no config URL provided Jan 17 12:18:32.478546 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:18:32.405796 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:18:32.505515 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:18:32.405806 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:18:32.551146 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:18:32.405829 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 17 12:18:32.558770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:18:32.409988 ignition[759]: GET result: OK Jan 17 12:18:32.587534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:18:32.410091 ignition[759]: parsing config with SHA512: 9f6f2460878e032fbee185afcfd4aac9ef9c6fc2b54b5576c1668749761133e4f2431c3ce892b802acae2590e7cdd20bc682d4651e6477b5c10bc1df2dbffcd5 Jan 17 12:18:32.595587 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:18:32.416642 ignition[759]: fetch: fetch complete Jan 17 12:18:32.623550 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:18:32.416657 ignition[759]: fetch: fetch passed Jan 17 12:18:32.629579 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:18:32.416712 ignition[759]: Ignition finished successfully Jan 17 12:18:32.651583 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:18:32.476030 ignition[764]: Ignition 2.19.0 Jan 17 12:18:32.476040 ignition[764]: Stage: kargs Jan 17 12:18:32.476240 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:32.476274 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:32.477307 ignition[764]: kargs: kargs passed Jan 17 12:18:32.477364 ignition[764]: Ignition finished successfully Jan 17 12:18:32.548425 ignition[771]: Ignition 2.19.0 Jan 17 12:18:32.548436 ignition[771]: Stage: disks Jan 17 12:18:32.548863 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:32.548879 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:32.549876 ignition[771]: disks: disks passed Jan 17 12:18:32.549940 ignition[771]: Ignition finished successfully Jan 17 12:18:32.711468 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:18:32.900403 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:18:32.931434 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 17 12:18:33.048700 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:18:33.049649 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:18:33.050556 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:18:33.070547 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:18:33.103400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:18:33.163448 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Jan 17 12:18:33.163500 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:33.163524 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:33.163547 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:18:33.163570 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:18:33.163593 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:18:33.142939 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:18:33.143015 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:18:33.143056 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:18:33.182673 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:18:33.217694 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:18:33.242516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:18:33.372882 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:18:33.384050 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:18:33.394495 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:18:33.404394 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:18:33.545352 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:18:33.550493 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:18:33.587392 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:33.591474 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:18:33.600477 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:18:33.646281 ignition[899]: INFO : Ignition 2.19.0 Jan 17 12:18:33.646281 ignition[899]: INFO : Stage: mount Jan 17 12:18:33.646281 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:33.646281 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:33.649075 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:18:33.717788 ignition[899]: INFO : mount: mount passed Jan 17 12:18:33.717788 ignition[899]: INFO : Ignition finished successfully Jan 17 12:18:33.663805 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:18:33.687440 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jan 17 12:18:33.783152 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (911) Jan 17 12:18:33.783210 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:33.783236 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:33.783272 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:18:33.717571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:18:33.809449 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:18:33.809507 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:18:33.744426 systemd-networkd[751]: eth0: Gained IPv6LL Jan 17 12:18:33.810926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:18:33.854535 ignition[928]: INFO : Ignition 2.19.0 Jan 17 12:18:33.854535 ignition[928]: INFO : Stage: files Jan 17 12:18:33.869435 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:33.869435 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:33.869435 ignition[928]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:18:33.869435 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:18:33.869435 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:18:33.865307 unknown[928]: wrote ssh authorized keys file for user: core Jan 17 12:18:34.006428 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:18:34.175636 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:18:34.175636 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 
12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:18:34.466528 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:18:34.806854 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.806854 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:18:34.845421 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:18:34.845421 ignition[928]: INFO : files: files passed Jan 17 12:18:34.845421 ignition[928]: INFO : Ignition finished successfully Jan 17 12:18:34.811481 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:18:34.832595 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:18:34.870598 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:18:34.920833 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 17 12:18:35.070577 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:35.070577 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:34.920996 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:18:35.136446 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:34.933619 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:18:34.945802 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:18:34.976503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:18:35.046992 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:18:35.047124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:18:35.063335 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:18:35.080471 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:18:35.080672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:18:35.084514 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:18:35.146840 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:18:35.168590 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:18:35.204224 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:18:35.215687 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:18:35.234742 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:18:35.245755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:18:35.245957 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:18:35.280787 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:18:35.293736 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:18:35.329725 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:18:35.340743 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:18:35.369698 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:18:35.379862 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:18:35.396789 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:18:35.433746 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:18:35.443772 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:18:35.460788 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:18:35.479753 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:18:35.479978 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:18:35.524759 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:18:35.535754 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:18:35.553750 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 17 12:18:35.553924 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:18:35.571722 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:18:35.571915 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:18:35.694429 ignition[979]: INFO : Ignition 2.19.0 Jan 17 12:18:35.694429 ignition[979]: INFO : Stage: umount Jan 17 12:18:35.694429 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:35.694429 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:35.694429 ignition[979]: INFO : umount: umount passed Jan 17 12:18:35.694429 ignition[979]: INFO : Ignition finished successfully Jan 17 12:18:35.615750 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:18:35.615979 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:18:35.625813 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:18:35.625994 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:18:35.652657 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:18:35.702512 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:18:35.702763 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:18:35.716638 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:18:35.752427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:18:35.752730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:18:35.765711 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:18:35.765904 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:18:35.806338 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:18:35.807482 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:18:35.807597 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:18:35.825162 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:18:35.825301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:18:35.844699 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:18:35.844824 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:18:35.854058 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:18:35.854123 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:18:35.879584 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:18:35.879671 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:18:35.889688 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:18:35.889755 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:18:35.907648 systemd[1]: Stopped target network.target - Network. Jan 17 12:18:35.924587 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:18:35.924686 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:18:35.939681 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:18:35.973425 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 17 12:18:35.975341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:18:35.983622 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:18:36.017553 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:18:36.036574 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:18:36.036656 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:18:36.044662 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:18:36.044725 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:18:36.060665 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:18:36.060740 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:18:36.077659 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:18:36.077733 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:18:36.095666 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:18:36.095741 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:18:36.112894 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:18:36.122355 systemd-networkd[751]: eth0: DHCPv6 lease lost Jan 17 12:18:36.140758 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:18:36.158956 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:18:36.159090 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:18:36.168164 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:18:36.168594 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:18:36.185242 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:18:36.185376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:18:36.206410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:18:36.217600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:18:36.217691 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:18:36.244670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:18:36.244741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:18:36.272613 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:18:36.272691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:18:36.292593 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:18:36.723442 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:18:36.292672 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:18:36.316791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:18:36.343943 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:18:36.344112 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:18:36.369652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:18:36.369722 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:18:36.399629 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 17 12:18:36.399690 systemd[1]: Closed systemd-udevd-kernel.socket - Closed udev Kernel Socket. Jan 17 12:18:36.417572 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:18:36.417656 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:18:36.445667 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:18:36.445746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:18:36.472664 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:18:36.472890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:36.525539 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:18:36.536559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:18:36.536640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:18:36.564615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:18:36.564695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:36.586107 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:18:36.586237 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:18:36.603872 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:18:36.603990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:18:36.615015 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:18:36.637495 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:18:36.671769 systemd[1]: Switching root. Jan 17 12:18:36.979392 systemd-journald[183]: Journal stopped
Jan 17 12:18:32.405362 ignition[759]: Ignition 2.19.0 Jan 17 12:18:32.415934 unknown[759]: fetched base config from "system" Jan 17 12:18:32.405374 ignition[759]: Stage: fetch Jan 17 12:18:32.415946 unknown[759]: fetched base config from "system" Jan 17 12:18:32.405634 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:32.415957 unknown[759]: fetched user config from "gcp" Jan 17 12:18:32.405647 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:32.418546 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:18:32.405780 ignition[759]: parsed url from cmdline: "" Jan 17 12:18:32.425489 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:18:32.405789 ignition[759]: no config URL provided Jan 17 12:18:32.478546 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:18:32.405796 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:18:32.505515 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:18:32.405806 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:18:32.551146 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:18:32.405829 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 17 12:18:32.558770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:18:32.409988 ignition[759]: GET result: OK Jan 17 12:18:32.587534 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:18:32.410091 ignition[759]: parsing config with SHA512: 9f6f2460878e032fbee185afcfd4aac9ef9c6fc2b54b5576c1668749761133e4f2431c3ce892b802acae2590e7cdd20bc682d4651e6477b5c10bc1df2dbffcd5 Jan 17 12:18:32.595587 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:18:32.416642 ignition[759]: fetch: fetch complete Jan 17 12:18:32.623550 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:18:32.416657 ignition[759]: fetch: fetch passed Jan 17 12:18:32.629579 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:18:32.416712 ignition[759]: Ignition finished successfully Jan 17 12:18:32.651583 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:18:32.476030 ignition[764]: Ignition 2.19.0 Jan 17 12:18:32.476040 ignition[764]: Stage: kargs Jan 17 12:18:32.476240 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:32.476274 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:32.477307 ignition[764]: kargs: kargs passed Jan 17 12:18:32.477364 ignition[764]: Ignition finished successfully Jan 17 12:18:32.548425 ignition[771]: Ignition 2.19.0 Jan 17 12:18:32.548436 ignition[771]: Stage: disks Jan 17 12:18:32.548863 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:32.548879 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:32.549876 ignition[771]: disks: disks passed Jan 17 12:18:32.549940 ignition[771]: Ignition finished successfully Jan 17 12:18:32.711468 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 17 12:18:32.900403 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:18:32.931434 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 17 12:18:33.048700 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:18:33.049649 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:18:33.050556 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:18:33.070547 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:18:33.103400 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:18:33.163448 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Jan 17 12:18:33.163500 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:33.163524 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:33.163547 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:18:33.163570 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:18:33.163593 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:18:33.142939 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:18:33.143015 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:18:33.143056 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:18:33.182673 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:18:33.217694 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:18:33.242516 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:18:33.372882 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:18:33.384050 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:18:33.394495 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:18:33.404394 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:18:33.545352 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:18:33.550493 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:18:33.587392 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:33.591474 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:18:33.600477 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:18:33.646281 ignition[899]: INFO : Ignition 2.19.0 Jan 17 12:18:33.646281 ignition[899]: INFO : Stage: mount Jan 17 12:18:33.646281 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:33.646281 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:33.649075 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:18:33.717788 ignition[899]: INFO : mount: mount passed Jan 17 12:18:33.717788 ignition[899]: INFO : Ignition finished successfully Jan 17 12:18:33.663805 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:18:33.687440 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jan 17 12:18:33.783152 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (911) Jan 17 12:18:33.783210 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:18:33.783236 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:18:33.783272 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:18:33.717571 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:18:33.809449 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 17 12:18:33.809507 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:18:33.744426 systemd-networkd[751]: eth0: Gained IPv6LL Jan 17 12:18:33.810926 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:18:33.854535 ignition[928]: INFO : Ignition 2.19.0 Jan 17 12:18:33.854535 ignition[928]: INFO : Stage: files Jan 17 12:18:33.869435 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:33.869435 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:33.869435 ignition[928]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:18:33.869435 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:18:33.869435 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:18:33.869435 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:18:33.865307 unknown[928]: wrote ssh authorized keys file for user: core Jan 17 12:18:34.006428 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:18:34.175636 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:18:34.175636 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 
12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.207418 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:18:34.466528 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:18:34.806854 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:18:34.806854 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:18:34.845421 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:18:34.845421 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:18:34.845421 ignition[928]: INFO : files: files passed Jan 17 12:18:34.845421 ignition[928]: INFO : Ignition finished successfully Jan 17 12:18:34.811481 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:18:34.832595 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:18:34.870598 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:18:34.920833 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jan 17 12:18:35.070577 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:35.070577 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:34.920996 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:18:35.136446 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:18:34.933619 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:18:34.945802 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:18:34.976503 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:18:35.046992 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:18:35.047124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:18:35.063335 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:18:35.080471 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:18:35.080672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:18:35.084514 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:18:35.146840 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:18:35.168590 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:18:35.204224 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:18:35.215687 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:18:35.234742 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:18:35.245755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:18:35.245957 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:18:35.280787 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:18:35.293736 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:18:35.329725 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:18:35.340743 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:18:35.369698 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:18:35.379862 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:18:35.396789 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:18:35.433746 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:18:35.443772 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:18:35.460788 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:18:35.479753 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:18:35.479978 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:18:35.524759 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:18:35.535754 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:18:35.553750 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 17 12:18:35.553924 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:18:35.571722 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:18:35.571915 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:18:35.694429 ignition[979]: INFO : Ignition 2.19.0 Jan 17 12:18:35.694429 ignition[979]: INFO : Stage: umount Jan 17 12:18:35.694429 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:18:35.694429 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 17 12:18:35.694429 ignition[979]: INFO : umount: umount passed Jan 17 12:18:35.694429 ignition[979]: INFO : Ignition finished successfully Jan 17 12:18:35.615750 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:18:35.615979 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:18:35.625813 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:18:35.625994 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:18:35.652657 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:18:35.702512 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:18:35.702763 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:18:35.716638 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:18:35.752427 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:18:35.752730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:18:35.765711 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:18:35.765904 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:18:35.806338 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:18:35.807482 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:18:35.807597 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:18:35.825162 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:18:35.825301 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:18:35.844699 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:18:35.844824 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:18:35.854058 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:18:35.854123 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:18:35.879584 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:18:35.879671 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:18:35.889688 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:18:35.889755 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:18:35.907648 systemd[1]: Stopped target network.target - Network. Jan 17 12:18:35.924587 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:18:35.924686 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:18:35.939681 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:18:35.973425 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 17 12:18:35.975341 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:18:35.983622 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:18:36.017553 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:18:36.036574 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:18:36.036656 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:18:36.044662 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:18:36.044725 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:18:36.060665 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:18:36.060740 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:18:36.077659 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:18:36.077733 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:18:36.095666 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:18:36.095741 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:18:36.112894 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:18:36.122355 systemd-networkd[751]: eth0: DHCPv6 lease lost Jan 17 12:18:36.140758 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:18:36.158956 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:18:36.159090 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:18:36.168164 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:18:36.168594 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:18:36.185242 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:18:36.185376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:18:36.206410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:18:36.217600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:18:36.217691 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:18:36.244670 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:18:36.244741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:18:36.272613 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:18:36.272691 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:18:36.292593 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:18:36.723442 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 17 12:18:36.292672 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:18:36.316791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:18:36.343943 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:18:36.344112 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:18:36.369652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:18:36.369722 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:18:36.399629 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 17 12:18:36.399690 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:18:36.417572 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:18:36.417656 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:18:36.445667 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:18:36.445746 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:18:36.472664 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:18:36.472890 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:18:36.525539 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:18:36.536559 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:18:36.536640 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:18:36.564615 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:18:36.564695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:36.586107 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:18:36.586237 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:18:36.603872 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:18:36.603990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:18:36.615015 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:18:36.637495 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:18:36.671769 systemd[1]: Switching root. Jan 17 12:18:36.979392 systemd-journald[183]: Journal stopped Jan 17 12:18:39.471406 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:18:39.471457 kernel: SELinux: policy capability open_perms=1 Jan 17 12:18:39.471478 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:18:39.471496 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:18:39.471513 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:18:39.471537 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:18:39.471557 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:18:39.471580 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:18:39.471602 kernel: audit: type=1403 audit(1737116317.323:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:18:39.471624 systemd[1]: Successfully loaded SELinux policy in 90.997ms. Jan 17 12:18:39.471646 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.906ms. Jan 17 12:18:39.471668 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:18:39.471688 systemd[1]: Detected virtualization google. Jan 17 12:18:39.471708 systemd[1]: Detected architecture x86-64. Jan 17 12:18:39.471734 systemd[1]: Detected first boot. Jan 17 12:18:39.471756 systemd[1]: Initializing machine ID from random generator. Jan 17 12:18:39.471778 zram_generator::config[1022]: No configuration found. Jan 17 12:18:39.471800 systemd[1]: Populated /etc with preset unit settings. 
Jan 17 12:18:39.471821 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:18:39.471845 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:18:39.471866 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:18:39.471888 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:18:39.471909 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:18:39.471930 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:18:39.471952 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:18:39.471973 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:18:39.471999 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:18:39.472020 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:18:39.472041 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:18:39.472064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:18:39.472086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:18:39.472107 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:18:39.472129 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:18:39.472150 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:18:39.472176 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:18:39.472198 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:18:39.472219 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:18:39.472274 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:18:39.472297 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:18:39.472319 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:18:39.472347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:18:39.472369 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:18:39.472392 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:18:39.472418 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:18:39.472440 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:18:39.472462 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:18:39.472485 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:18:39.472507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:18:39.472535 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:18:39.472557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:18:39.472587 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:18:39.472609 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 17 12:18:39.472632 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:18:39.472654 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:18:39.472677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:39.472704 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:18:39.472727 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:18:39.472749 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:18:39.472773 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:18:39.472796 systemd[1]: Reached target machines.target - Containers. Jan 17 12:18:39.472819 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:18:39.472842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:18:39.472864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:18:39.472891 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:18:39.472913 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:18:39.472936 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:18:39.472958 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:18:39.472982 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:18:39.473005 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:18:39.473026 kernel: fuse: init (API version 7.39) Jan 17 12:18:39.473046 kernel: ACPI: bus type drm_connector registered Jan 17 12:18:39.473074 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:18:39.473097 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:18:39.473119 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:18:39.473141 kernel: loop: module loaded Jan 17 12:18:39.473162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:18:39.473185 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:18:39.473207 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:18:39.473269 systemd-journald[1109]: Collecting audit messages is disabled. Jan 17 12:18:39.473319 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:18:39.473343 systemd-journald[1109]: Journal started Jan 17 12:18:39.473386 systemd-journald[1109]: Runtime Journal (/run/log/journal/32067c2dbe4a46e99138cd4450613f05) is 8.0M, max 148.7M, 140.7M free. Jan 17 12:18:38.240940 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:18:38.260950 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:18:38.261569 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:18:39.502311 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 17 12:18:39.534298 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:18:39.567371 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:18:39.567484 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:18:39.568311 systemd[1]: Stopped verity-setup.service. Jan 17 12:18:39.605295 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:39.615303 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:18:39.625782 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:18:39.635670 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:18:39.646683 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:18:39.656699 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:18:39.666689 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:18:39.676675 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:18:39.686902 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:18:39.698956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:18:39.710943 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:18:39.711189 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:18:39.722939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:18:39.723170 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:18:39.734823 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:18:39.735067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:18:39.745782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:18:39.746013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:18:39.758797 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:18:39.759017 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:18:39.768761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:18:39.768989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:18:39.778777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:18:39.788786 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:18:39.800806 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:18:39.812808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:18:39.837977 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:18:39.854449 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:18:39.879069 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:18:39.889436 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:18:39.889521 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 17 12:18:39.900980 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:18:39.924531 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:18:39.936796 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:18:39.947623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:18:39.954671 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:18:39.970820 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:18:39.984146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:18:39.991576 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:18:40.001961 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:18:40.009663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:18:40.018345 systemd-journald[1109]: Time spent on flushing to /var/log/journal/32067c2dbe4a46e99138cd4450613f05 is 71.159ms for 928 entries. Jan 17 12:18:40.018345 systemd-journald[1109]: System Journal (/var/log/journal/32067c2dbe4a46e99138cd4450613f05) is 8.0M, max 584.8M, 576.8M free. Jan 17 12:18:40.152115 systemd-journald[1109]: Received client request to flush runtime journal. Jan 17 12:18:40.152229 kernel: loop0: detected capacity change from 0 to 211296 Jan 17 12:18:40.039823 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:18:40.059542 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:18:40.077483 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:18:40.091307 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:18:40.108682 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:18:40.119816 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:18:40.131878 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:18:40.144337 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:18:40.155085 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:18:40.185283 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:18:40.205673 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:18:40.216961 udevadm[1143]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:18:40.232461 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:18:40.252925 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:18:40.256818 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:18:40.267753 kernel: loop1: detected capacity change from 0 to 54824 Jan 17 12:18:40.280574 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 17 12:18:40.301093 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:18:40.349645 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 12:18:40.365636 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 17 12:18:40.365672 systemd-tmpfiles[1158]: ACLs are not supported, ignoring. Jan 17 12:18:40.382894 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:18:40.468313 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:18:40.572344 kernel: loop4: detected capacity change from 0 to 211296 Jan 17 12:18:40.619289 kernel: loop5: detected capacity change from 0 to 54824 Jan 17 12:18:40.653302 kernel: loop6: detected capacity change from 0 to 140768 Jan 17 12:18:40.708896 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 12:18:40.761246 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 17 12:18:40.764947 (sd-merge)[1165]: Merged extensions into '/usr'. Jan 17 12:18:40.778250 systemd[1]: Reloading requested from client PID 1140 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:18:40.778675 systemd[1]: Reloading... Jan 17 12:18:40.950478 zram_generator::config[1190]: No configuration found. Jan 17 12:18:41.219107 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:41.232302 ldconfig[1135]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:18:41.331562 systemd[1]: Reloading finished in 552 ms. Jan 17 12:18:41.357558 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:18:41.367870 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:18:41.392575 systemd[1]: Starting ensure-sysext.service... Jan 17 12:18:41.410417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:18:41.431294 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:18:41.431483 systemd[1]: Reloading... Jan 17 12:18:41.450389 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:18:41.451079 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:18:41.452938 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:18:41.453551 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Jan 17 12:18:41.453685 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Jan 17 12:18:41.460654 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:18:41.460828 systemd-tmpfiles[1232]: Skipping /boot Jan 17 12:18:41.481591 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:18:41.481614 systemd-tmpfiles[1232]: Skipping /boot Jan 17 12:18:41.546204 zram_generator::config[1256]: No configuration found. 
Jan 17 12:18:41.684834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:18:41.749962 systemd[1]: Reloading finished in 317 ms. Jan 17 12:18:41.769841 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:18:41.792973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:18:41.817675 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:18:41.836434 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:18:41.856633 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:18:41.874597 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:18:41.893687 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:18:41.903222 augenrules[1321]: No rules Jan 17 12:18:41.915527 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:18:41.934537 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:18:41.953810 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:41.954699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:18:41.963001 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Jan 17 12:18:41.966181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:18:41.983628 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:18:42.000916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:18:42.011645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:18:42.020673 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:18:42.030378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:42.037570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:18:42.052425 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:18:42.064183 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:18:42.076195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:18:42.076481 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:18:42.088236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:18:42.089061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:18:42.102522 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:18:42.102767 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:18:42.118565 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:18:42.161956 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 17 12:18:42.200056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:42.201696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:18:42.209518 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:18:42.226668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:18:42.242510 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:18:42.261566 systemd-resolved[1316]: Positive Trust Anchors: Jan 17 12:18:42.261595 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:18:42.261661 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:18:42.262528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:18:42.278523 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:18:42.284511 systemd-resolved[1316]: Defaulting to hostname 'linux'. Jan 17 12:18:42.287557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:18:42.313558 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:18:42.323467 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:18:42.342535 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:18:42.353399 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:18:42.353463 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:18:42.354492 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:18:42.365088 systemd[1]: Finished ensure-sysext.service. Jan 17 12:18:42.375006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:18:42.376718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:18:42.387958 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:18:42.388227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:18:42.398935 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:18:42.400602 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:18:42.416510 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:18:42.421023 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:18:42.422319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 17 12:18:42.440014 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:18:42.455318 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 17 12:18:42.474542 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 17 12:18:42.486369 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:18:42.497322 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:18:42.517500 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 17 12:18:42.521949 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:18:42.527335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:18:42.541287 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:18:42.548568 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 17 12:18:42.557306 kernel: ACPI: button: Sleep Button [SLPF] Jan 17 12:18:42.563435 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:18:42.563842 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:18:42.573303 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:18:42.628298 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1342) Jan 17 12:18:42.661060 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:18:42.687131 systemd-networkd[1370]: lo: Link UP Jan 17 12:18:42.688510 systemd-networkd[1370]: lo: Gained carrier Jan 17 12:18:42.690922 systemd-networkd[1370]: Enumeration completed Jan 17 12:18:42.691618 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:18:42.691626 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:18:42.692458 systemd-networkd[1370]: eth0: Link UP Jan 17 12:18:42.692466 systemd-networkd[1370]: eth0: Gained carrier Jan 17 12:18:42.692493 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:18:42.693702 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:18:42.702365 systemd-networkd[1370]: eth0: DHCPv4 address 10.128.0.67/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 17 12:18:42.704343 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 17 12:18:42.715960 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:18:42.721807 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 17 12:18:42.723169 systemd[1]: Reached target network.target - Network. Jan 17 12:18:42.730596 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:18:42.733757 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:18:42.737531 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:18:42.762171 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
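eth0 above is matched by the catch-all zz-default.network and acquires its /32 address and gateway over DHCP from the metadata server. A sketch of an explicit per-interface unit (hypothetical file name) that would replace the "potentially unpredictable interface name" fallback while keeping DHCP:

    sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=ipv4
    EOF
    sudo systemctl restart systemd-networkd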
Jan 17 12:18:42.779774 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:18:42.808890 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:18:42.809568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:18:42.819555 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:18:42.829280 lvm[1414]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:18:42.837137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:18:42.849588 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:18:42.860725 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:18:42.873535 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:18:42.885712 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:18:42.895593 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:18:42.906440 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:18:42.917417 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:18:42.917478 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:18:42.926405 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:18:42.936155 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:18:42.948133 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:18:42.962105 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:18:42.972471 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:18:42.983729 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:18:42.994240 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:18:43.004493 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:18:43.013561 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:18:43.013614 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:18:43.021447 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:18:43.033205 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:18:43.050460 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:18:43.072439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:18:43.096528 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:18:43.105947 jq[1423]: false Jan 17 12:18:43.106417 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:18:43.115979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:18:43.135215 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:18:43.151439 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
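With sockets.target and basic.target reached above, docker.socket, sshd.socket and dbus.socket are listening before their services start (socket activation). To confirm that from a shell, something like the following would show the active socket units and what they trigger:

    systemctl list-sockets --no-pager
    systemctl status docker.socket sshd.socket dbus.socket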
Jan 17 12:18:43.157412 coreos-metadata[1421]: Jan 17 12:18:43.156 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 17 12:18:43.159898 coreos-metadata[1421]: Jan 17 12:18:43.159 INFO Fetch successful Jan 17 12:18:43.159898 coreos-metadata[1421]: Jan 17 12:18:43.159 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 17 12:18:43.161253 coreos-metadata[1421]: Jan 17 12:18:43.160 INFO Fetch successful Jan 17 12:18:43.161253 coreos-metadata[1421]: Jan 17 12:18:43.160 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 17 12:18:43.161253 coreos-metadata[1421]: Jan 17 12:18:43.160 INFO Fetch successful Jan 17 12:18:43.161253 coreos-metadata[1421]: Jan 17 12:18:43.160 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 17 12:18:43.163759 coreos-metadata[1421]: Jan 17 12:18:43.162 INFO Fetch successful Jan 17 12:18:43.170534 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:18:43.190535 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:18:43.198323 extend-filesystems[1424]: Found loop4 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found loop5 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found loop6 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found loop7 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda1 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda2 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda3 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found usr Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda4 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda6 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda7 Jan 17 12:18:43.208739 extend-filesystems[1424]: Found sda9 Jan 17 12:18:43.208739 extend-filesystems[1424]: Checking size of /dev/sda9 Jan 17 12:18:43.447809 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 17 12:18:43.447867 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 17 12:18:43.447903 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1343) Jan 17 12:18:43.203871 dbus-daemon[1422]: [system] SELinux support is enabled Jan 17 12:18:43.448618 extend-filesystems[1424]: Resized partition /dev/sda9 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: ---------------------------------------------------- Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: corporation. 
Support and training for ntp-4 are Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: available at https://www.nwtime.org/support Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: ---------------------------------------------------- Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: proto: precision = 0.110 usec (-23) Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: basedate set to 2025-01-05 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: gps base set to 2025-01-05 (week 2348) Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Listen normally on 3 eth0 10.128.0.67:123 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Listen normally on 4 lo [::1]:123 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:43%2 Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:43.468425 ntpd[1428]: 17 Jan 12:18:43 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:43.210587 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:18:43.207431 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1370 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:18:43.482854 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:18:43.482854 extend-filesystems[1446]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 17 12:18:43.482854 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 17 12:18:43.482854 extend-filesystems[1446]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 17 12:18:43.255032 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 17 12:18:43.218246 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:35 UTC 2025 (1): Starting Jan 17 12:18:43.539502 extend-filesystems[1424]: Resized filesystem in /dev/sda9 Jan 17 12:18:43.255867 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:18:43.218316 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:18:43.267479 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:18:43.218333 ntpd[1428]: ---------------------------------------------------- Jan 17 12:18:43.273472 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 17 12:18:43.550734 jq[1450]: true Jan 17 12:18:43.218348 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:18:43.307587 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:18:43.218362 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:18:43.334874 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:18:43.554690 update_engine[1449]: I20250117 12:18:43.455881 1449 main.cc:92] Flatcar Update Engine starting Jan 17 12:18:43.554690 update_engine[1449]: I20250117 12:18:43.457758 1449 update_check_scheduler.cc:74] Next update check in 7m59s Jan 17 12:18:43.218376 ntpd[1428]: corporation. Support and training for ntp-4 are Jan 17 12:18:43.335985 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:18:43.218390 ntpd[1428]: available at https://www.nwtime.org/support Jan 17 12:18:43.337503 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:18:43.218404 ntpd[1428]: ---------------------------------------------------- Jan 17 12:18:43.338113 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:18:43.556964 jq[1462]: true Jan 17 12:18:43.220489 ntpd[1428]: proto: precision = 0.110 usec (-23) Jan 17 12:18:43.375119 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:18:43.221759 ntpd[1428]: basedate set to 2025-01-05 Jan 17 12:18:43.375410 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:18:43.221783 ntpd[1428]: gps base set to 2025-01-05 (week 2348) Jan 17 12:18:43.395907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:18:43.224379 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:18:43.396784 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:18:43.224440 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:18:43.410876 systemd-logind[1441]: Watching system buttons on /dev/input/event2 (Power Button) Jan 17 12:18:43.224689 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:18:43.410911 systemd-logind[1441]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 17 12:18:43.224747 ntpd[1428]: Listen normally on 3 eth0 10.128.0.67:123 Jan 17 12:18:43.410944 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:18:43.224804 ntpd[1428]: Listen normally on 4 lo [::1]:123 Jan 17 12:18:43.411283 systemd-logind[1441]: New seat seat0. Jan 17 12:18:43.224866 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:43%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 12:18:43.429730 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:18:43.224898 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:43%2#123 Jan 17 12:18:43.535903 (ntainerd)[1463]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:18:43.224921 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:43%2 Jan 17 12:18:43.544063 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:18:43.224967 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Jan 17 12:18:43.560463 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jan 17 12:18:43.226576 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:43.226610 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:18:43.503620 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:18:43.580601 tar[1458]: linux-amd64/helm Jan 17 12:18:43.596071 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:18:43.608738 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:18:43.609937 bash[1487]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:18:43.609476 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:18:43.609718 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:18:43.635698 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:18:43.645454 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:18:43.646028 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:18:43.682817 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:18:43.704138 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:18:43.733116 systemd[1]: Starting sshkeys.service... Jan 17 12:18:43.792601 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:18:43.814330 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 17 12:18:43.843438 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.962 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.962 INFO Fetch failed with 404: resource not found Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetch successful Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetch failed with 404: resource not found Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetch failed with 404: resource not found Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 17 12:18:43.963625 coreos-metadata[1495]: Jan 17 12:18:43.963 INFO Fetch successful Jan 17 12:18:43.967226 unknown[1495]: wrote ssh authorized keys file for user: core Jan 17 12:18:44.011040 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:18:44.015161 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:18:44.015862 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1491 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:18:44.016228 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:18:44.022447 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:18:44.026053 update-ssh-keys[1510]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:18:44.033492 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:18:44.048875 systemd[1]: Finished sshkeys.service. Jan 17 12:18:44.052493 systemd-networkd[1370]: eth0: Gained IPv6LL Jan 17 12:18:44.059934 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:18:44.081076 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:18:44.104697 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:18:44.124512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:44.142644 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:18:44.154684 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 17 12:18:44.174683 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:18:44.194493 systemd[1]: Started sshd@0-10.128.0.67:22-139.178.89.65:39448.service - OpenSSH per-connection server daemon (139.178.89.65:39448). Jan 17 12:18:44.213778 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:18:44.214043 systemd[1]: Finished issuegen.service - Generate /run/issue. 
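The coreos-metadata fetches above (hostname, IPs, machine type, and the ssh-keys attributes, with 404s for the variants not set on this instance) all go to the GCE metadata server at 169.254.169.254. The same endpoints can be queried by hand; the Metadata-Flavor header is required or the server refuses the request:

    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys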
Jan 17 12:18:44.222032 init.sh[1526]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 17 12:18:44.222032 init.sh[1526]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 17 12:18:44.222032 init.sh[1526]: + /usr/bin/google_instance_setup Jan 17 12:18:44.247690 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:18:44.312874 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:18:44.350393 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:18:44.359623 polkitd[1527]: Started polkitd version 121 Jan 17 12:18:44.369648 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:18:44.388006 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:18:44.401744 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:18:44.411076 polkitd[1527]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:18:44.411188 polkitd[1527]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:18:44.422445 polkitd[1527]: Finished loading, compiling and executing 2 rules Jan 17 12:18:44.433869 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:18:44.434658 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:18:44.438633 polkitd[1527]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:18:44.496944 containerd[1463]: time="2025-01-17T12:18:44.496562991Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:18:44.497484 systemd-resolved[1316]: System hostname changed to 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal'. Jan 17 12:18:44.497623 systemd-hostnamed[1491]: Hostname set to (transient) Jan 17 12:18:44.584492 containerd[1463]: time="2025-01-17T12:18:44.584201622Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:44.595361 containerd[1463]: time="2025-01-17T12:18:44.595296569Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:44.595561 containerd[1463]: time="2025-01-17T12:18:44.595535354Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:18:44.595662 containerd[1463]: time="2025-01-17T12:18:44.595644081Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:18:44.595988 containerd[1463]: time="2025-01-17T12:18:44.595944846Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:18:44.596112 containerd[1463]: time="2025-01-17T12:18:44.596092732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:44.596340 containerd[1463]: time="2025-01-17T12:18:44.596296853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:44.597284 containerd[1463]: time="2025-01-17T12:18:44.596442791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 17 12:18:44.597284 containerd[1463]: time="2025-01-17T12:18:44.596722441Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:44.597284 containerd[1463]: time="2025-01-17T12:18:44.596751511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:44.597284 containerd[1463]: time="2025-01-17T12:18:44.596775020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:44.597284 containerd[1463]: time="2025-01-17T12:18:44.596792008Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:44.597284 containerd[1463]: time="2025-01-17T12:18:44.596914193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:44.599276 containerd[1463]: time="2025-01-17T12:18:44.597251089Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:18:44.599621 containerd[1463]: time="2025-01-17T12:18:44.599587219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:18:44.599766 containerd[1463]: time="2025-01-17T12:18:44.599740261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:18:44.600017 containerd[1463]: time="2025-01-17T12:18:44.599987904Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:18:44.600193 containerd[1463]: time="2025-01-17T12:18:44.600168964Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:18:44.610050 containerd[1463]: time="2025-01-17T12:18:44.609650129Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:18:44.610050 containerd[1463]: time="2025-01-17T12:18:44.609727761Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:18:44.610050 containerd[1463]: time="2025-01-17T12:18:44.609773849Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:18:44.610050 containerd[1463]: time="2025-01-17T12:18:44.609801868Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:18:44.610050 containerd[1463]: time="2025-01-17T12:18:44.609828686Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:18:44.610050 containerd[1463]: time="2025-01-17T12:18:44.610024232Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:18:44.612548 containerd[1463]: time="2025-01-17T12:18:44.612489799Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jan 17 12:18:44.613053 containerd[1463]: time="2025-01-17T12:18:44.612929152Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:18:44.613053 containerd[1463]: time="2025-01-17T12:18:44.612985298Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:18:44.613053 containerd[1463]: time="2025-01-17T12:18:44.613011183Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613067216Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613091355Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613134756Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613159481Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613221772Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613245275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613292095Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613314425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613364826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613386863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613409096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613451263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613472786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.615919 containerd[1463]: time="2025-01-17T12:18:44.613495496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613535153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613559012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613580660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613625677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613647018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613694670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613718305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613744900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613808132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613853692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613874110Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613958566Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.613991729Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:18:44.616541 containerd[1463]: time="2025-01-17T12:18:44.614110862Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:18:44.617121 containerd[1463]: time="2025-01-17T12:18:44.614143483Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:18:44.617121 containerd[1463]: time="2025-01-17T12:18:44.614181219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:18:44.617121 containerd[1463]: time="2025-01-17T12:18:44.614220226Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:18:44.617121 containerd[1463]: time="2025-01-17T12:18:44.614267576Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:18:44.617121 containerd[1463]: time="2025-01-17T12:18:44.614287179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 17 12:18:44.618169 containerd[1463]: time="2025-01-17T12:18:44.614875873Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:18:44.618169 containerd[1463]: time="2025-01-17T12:18:44.615008950Z" level=info msg="Connect containerd service" Jan 17 12:18:44.618169 containerd[1463]: time="2025-01-17T12:18:44.615077883Z" level=info msg="using legacy CRI server" Jan 17 12:18:44.618169 containerd[1463]: time="2025-01-17T12:18:44.615089883Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:18:44.618169 containerd[1463]: time="2025-01-17T12:18:44.617395056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:18:44.620865 containerd[1463]: time="2025-01-17T12:18:44.619491018Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:18:44.620865 
containerd[1463]: time="2025-01-17T12:18:44.619720592Z" level=info msg="Start subscribing containerd event" Jan 17 12:18:44.620865 containerd[1463]: time="2025-01-17T12:18:44.619794278Z" level=info msg="Start recovering state" Jan 17 12:18:44.620865 containerd[1463]: time="2025-01-17T12:18:44.619886412Z" level=info msg="Start event monitor" Jan 17 12:18:44.620865 containerd[1463]: time="2025-01-17T12:18:44.619907996Z" level=info msg="Start snapshots syncer" Jan 17 12:18:44.620865 containerd[1463]: time="2025-01-17T12:18:44.619921993Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:18:44.620865 containerd[1463]: time="2025-01-17T12:18:44.619936108Z" level=info msg="Start streaming server" Jan 17 12:18:44.621167 containerd[1463]: time="2025-01-17T12:18:44.621001585Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:18:44.621167 containerd[1463]: time="2025-01-17T12:18:44.621081723Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:18:44.621302 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:18:44.623672 containerd[1463]: time="2025-01-17T12:18:44.623622520Z" level=info msg="containerd successfully booted in 0.130318s" Jan 17 12:18:44.715122 sshd[1528]: Accepted publickey for core from 139.178.89.65 port 39448 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:18:44.716125 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:44.736232 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:18:44.755727 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:18:44.773345 systemd-logind[1441]: New session 1 of user core. Jan 17 12:18:44.805369 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:18:44.827551 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:18:44.876784 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:18:44.893801 tar[1458]: linux-amd64/LICENSE Jan 17 12:18:44.893801 tar[1458]: linux-amd64/README.md Jan 17 12:18:44.916755 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:18:45.121225 systemd[1557]: Queued start job for default target default.target. Jan 17 12:18:45.129985 systemd[1557]: Created slice app.slice - User Application Slice. Jan 17 12:18:45.130030 systemd[1557]: Reached target paths.target - Paths. Jan 17 12:18:45.130058 systemd[1557]: Reached target timers.target - Timers. Jan 17 12:18:45.133381 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:18:45.162898 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:18:45.164004 systemd[1557]: Reached target sockets.target - Sockets. Jan 17 12:18:45.164036 systemd[1557]: Reached target basic.target - Basic System. Jan 17 12:18:45.164112 systemd[1557]: Reached target default.target - Main User Target. Jan 17 12:18:45.164168 systemd[1557]: Startup finished in 261ms. Jan 17 12:18:45.164934 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:18:45.186536 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:18:45.245128 instance-setup[1531]: INFO Running google_set_multiqueue. Jan 17 12:18:45.265504 instance-setup[1531]: INFO Set channels for eth0 to 2. 
Jan 17 12:18:45.270041 instance-setup[1531]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Jan 17 12:18:45.272471 instance-setup[1531]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Jan 17 12:18:45.272569 instance-setup[1531]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. Jan 17 12:18:45.274427 instance-setup[1531]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Jan 17 12:18:45.275084 instance-setup[1531]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Jan 17 12:18:45.277227 instance-setup[1531]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Jan 17 12:18:45.278129 instance-setup[1531]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Jan 17 12:18:45.279834 instance-setup[1531]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Jan 17 12:18:45.289886 instance-setup[1531]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 12:18:45.295142 instance-setup[1531]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 17 12:18:45.297173 instance-setup[1531]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 17 12:18:45.297242 instance-setup[1531]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 17 12:18:45.317628 init.sh[1526]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 17 12:18:45.517229 startup-script[1599]: INFO Starting startup scripts. Jan 17 12:18:45.523115 startup-script[1599]: INFO No startup scripts found in metadata. Jan 17 12:18:45.523192 startup-script[1599]: INFO Finished running startup scripts. Jan 17 12:18:45.545859 init.sh[1526]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 17 12:18:45.545859 init.sh[1526]: + daemon_pids=() Jan 17 12:18:45.546042 init.sh[1526]: + for d in accounts clock_skew network Jan 17 12:18:45.546575 init.sh[1526]: + daemon_pids+=($!) Jan 17 12:18:45.546575 init.sh[1526]: + for d in accounts clock_skew network Jan 17 12:18:45.546713 init.sh[1603]: + /usr/bin/google_accounts_daemon Jan 17 12:18:45.547116 init.sh[1526]: + daemon_pids+=($!) Jan 17 12:18:45.547116 init.sh[1526]: + for d in accounts clock_skew network Jan 17 12:18:45.547116 init.sh[1526]: + daemon_pids+=($!) Jan 17 12:18:45.547116 init.sh[1526]: + NOTIFY_SOCKET=/run/systemd/notify Jan 17 12:18:45.547116 init.sh[1526]: + /usr/bin/systemd-notify --ready Jan 17 12:18:45.547696 init.sh[1605]: + /usr/bin/google_network_daemon Jan 17 12:18:45.548103 init.sh[1604]: + /usr/bin/google_clock_skew_daemon Jan 17 12:18:45.579781 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 17 12:18:45.596418 init.sh[1526]: + wait -n 1603 1604 1605 Jan 17 12:18:45.858974 google-networking[1605]: INFO Starting Google Networking daemon. Jan 17 12:18:45.920797 google-clock-skew[1604]: INFO Starting Google Clock Skew daemon. Jan 17 12:18:45.935155 google-clock-skew[1604]: INFO Clock drift token has changed: 0. Jan 17 12:18:45.979196 groupadd[1615]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 17 12:18:45.983063 groupadd[1615]: group added to /etc/gshadow: name=google-sudoers Jan 17 12:18:46.035038 groupadd[1615]: new group: name=google-sudoers, GID=1000 Jan 17 12:18:46.066176 google-accounts[1603]: INFO Starting Google Accounts daemon. Jan 17 12:18:46.079703 google-accounts[1603]: WARNING OS Login not installed. Jan 17 12:18:46.081682 google-accounts[1603]: INFO Creating a new user account for 0. 
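What google_set_multiqueue is doing above amounts to pinning each virtio-net queue interrupt to one vCPU and setting the matching XPS bitmask per TX queue. A hand-rolled equivalent for this boot's IRQ numbers (27-30, run as root) would look roughly like:

    echo 0 > /proc/irq/27/smp_affinity_list   # queue 0 -> CPU 0
    echo 0 > /proc/irq/28/smp_affinity_list
    echo 1 > /proc/irq/29/smp_affinity_list   # queue 1 -> CPU 1
    echo 1 > /proc/irq/30/smp_affinity_list
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # bitmask 0x1 = CPU 0
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus   # bitmask 0x2 = CPU 1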
Jan 17 12:18:46.086211 init.sh[1623]: useradd: invalid user name '0': use --badname to ignore Jan 17 12:18:46.086460 google-accounts[1603]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 17 12:18:46.218865 ntpd[1428]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:43%2]:123 Jan 17 12:18:46.219644 ntpd[1428]: 17 Jan 12:18:46 ntpd[1428]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:43%2]:123 Jan 17 12:18:46.373960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:46.386276 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:18:46.386864 (kubelet)[1630]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:46.396996 systemd[1]: Startup finished in 1.027s (kernel) + 8.543s (initrd) + 9.152s (userspace) = 18.723s. Jan 17 12:18:47.000853 systemd-resolved[1316]: Clock change detected. Flushing caches. Jan 17 12:18:47.001201 google-clock-skew[1604]: INFO Synced system time with hardware clock. Jan 17 12:18:47.810224 kubelet[1630]: E0117 12:18:47.810083 1630 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:47.813196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:47.813462 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:47.814038 systemd[1]: kubelet.service: Consumed 1.299s CPU time. Jan 17 12:18:50.751556 systemd[1]: Started sshd@1-10.128.0.67:22-139.178.89.65:39468.service - OpenSSH per-connection server daemon (139.178.89.65:39468). Jan 17 12:18:51.041735 sshd[1643]: Accepted publickey for core from 139.178.89.65 port 39468 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:18:51.043606 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:51.050217 systemd-logind[1441]: New session 2 of user core. Jan 17 12:18:51.060422 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:18:51.255905 sshd[1643]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:51.261568 systemd[1]: sshd@1-10.128.0.67:22-139.178.89.65:39468.service: Deactivated successfully. Jan 17 12:18:51.263849 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:18:51.264839 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:18:51.266576 systemd-logind[1441]: Removed session 2. Jan 17 12:18:51.311544 systemd[1]: Started sshd@2-10.128.0.67:22-139.178.89.65:45288.service - OpenSSH per-connection server daemon (139.178.89.65:45288). Jan 17 12:18:51.597992 sshd[1650]: Accepted publickey for core from 139.178.89.65 port 45288 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:18:51.599971 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:51.606372 systemd-logind[1441]: New session 3 of user core. Jan 17 12:18:51.613365 systemd[1]: Started session-3.scope - Session 3 of User core. 
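The kubelet failure above (and its later scheduled restarts) is the expected state before cluster bootstrap: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so the unit keeps exiting until that happens. Purely as a sketch of what such a file contains (hypothetical values, not taken from this node):

    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # matches the SystemdCgroup=true runc option containerd logged earlier
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF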
Jan 17 12:18:51.808296 sshd[1650]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:51.813761 systemd[1]: sshd@2-10.128.0.67:22-139.178.89.65:45288.service: Deactivated successfully. Jan 17 12:18:51.815998 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:18:51.817000 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:18:51.818595 systemd-logind[1441]: Removed session 3. Jan 17 12:18:53.547557 systemd[1]: Started sshd@3-10.128.0.67:22-115.91.91.182:34086.service - OpenSSH per-connection server daemon (115.91.91.182:34086). Jan 17 12:18:54.505106 sshd[1657]: Invalid user reza from 115.91.91.182 port 34086 Jan 17 12:18:54.683326 sshd[1657]: Received disconnect from 115.91.91.182 port 34086:11: Bye Bye [preauth] Jan 17 12:18:54.683326 sshd[1657]: Disconnected from invalid user reza 115.91.91.182 port 34086 [preauth] Jan 17 12:18:54.686434 systemd[1]: sshd@3-10.128.0.67:22-115.91.91.182:34086.service: Deactivated successfully. Jan 17 12:18:56.864530 systemd[1]: Started sshd@4-10.128.0.67:22-139.178.89.65:45308.service - OpenSSH per-connection server daemon (139.178.89.65:45308). Jan 17 12:18:57.157629 sshd[1662]: Accepted publickey for core from 139.178.89.65 port 45308 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:18:57.159574 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:18:57.165209 systemd-logind[1441]: New session 4 of user core. Jan 17 12:18:57.172427 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:18:57.371329 sshd[1662]: pam_unix(sshd:session): session closed for user core Jan 17 12:18:57.376715 systemd[1]: sshd@4-10.128.0.67:22-139.178.89.65:45308.service: Deactivated successfully. Jan 17 12:18:57.378936 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:18:57.379900 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:18:57.381470 systemd-logind[1441]: Removed session 4. Jan 17 12:18:57.635564 systemd[1]: Started sshd@5-10.128.0.67:22-51.178.141.222:46888.service - OpenSSH per-connection server daemon (51.178.141.222:46888). Jan 17 12:18:57.786566 systemd[1]: Started sshd@6-10.128.0.67:22-85.190.243.197:34854.service - OpenSSH per-connection server daemon (85.190.243.197:34854). Jan 17 12:18:58.063782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:18:58.069461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:18:58.248828 sshd[1669]: Invalid user acer from 51.178.141.222 port 46888 Jan 17 12:18:58.341284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:18:58.350682 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:18:58.367114 sshd[1669]: Received disconnect from 51.178.141.222 port 46888:11: Bye Bye [preauth] Jan 17 12:18:58.367114 sshd[1669]: Disconnected from invalid user acer 51.178.141.222 port 46888 [preauth] Jan 17 12:18:58.370735 systemd[1]: sshd@5-10.128.0.67:22-51.178.141.222:46888.service: Deactivated successfully. 
Jan 17 12:18:58.429109 kubelet[1682]: E0117 12:18:58.429050 1682 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:18:58.433980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:18:58.434266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:18:59.829001 sshd[1672]: Received disconnect from 85.190.243.197 port 34854:11: Bye Bye [preauth] Jan 17 12:18:59.829001 sshd[1672]: Disconnected from authenticating user root 85.190.243.197 port 34854 [preauth] Jan 17 12:18:59.832060 systemd[1]: sshd@6-10.128.0.67:22-85.190.243.197:34854.service: Deactivated successfully. Jan 17 12:19:02.426559 systemd[1]: Started sshd@7-10.128.0.67:22-139.178.89.65:58214.service - OpenSSH per-connection server daemon (139.178.89.65:58214). Jan 17 12:19:02.720800 sshd[1696]: Accepted publickey for core from 139.178.89.65 port 58214 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:19:02.722705 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:02.729100 systemd-logind[1441]: New session 5 of user core. Jan 17 12:19:02.739449 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:19:02.914874 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:19:02.915398 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:02.931041 sudo[1699]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:02.974312 sshd[1696]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:02.980551 systemd[1]: sshd@7-10.128.0.67:22-139.178.89.65:58214.service: Deactivated successfully. Jan 17 12:19:02.982805 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:19:02.983778 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:19:02.985392 systemd-logind[1441]: Removed session 5. Jan 17 12:19:03.029546 systemd[1]: Started sshd@8-10.128.0.67:22-139.178.89.65:58216.service - OpenSSH per-connection server daemon (139.178.89.65:58216). Jan 17 12:19:03.322612 sshd[1704]: Accepted publickey for core from 139.178.89.65 port 58216 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:19:03.324268 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:03.331235 systemd-logind[1441]: New session 6 of user core. Jan 17 12:19:03.340413 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:19:03.500893 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:19:03.501490 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:03.506722 sudo[1708]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:03.520762 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:19:03.521293 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:03.538556 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 17 12:19:03.541987 auditctl[1711]: No rules Jan 17 12:19:03.542518 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:19:03.542818 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:03.546077 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:19:03.593934 augenrules[1729]: No rules Jan 17 12:19:03.595041 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:19:03.596954 sudo[1707]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:03.639740 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:03.644078 systemd[1]: sshd@8-10.128.0.67:22-139.178.89.65:58216.service: Deactivated successfully. Jan 17 12:19:03.646426 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:19:03.648299 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:19:03.649677 systemd-logind[1441]: Removed session 6. Jan 17 12:19:03.698593 systemd[1]: Started sshd@9-10.128.0.67:22-139.178.89.65:58222.service - OpenSSH per-connection server daemon (139.178.89.65:58222). Jan 17 12:19:03.988761 sshd[1737]: Accepted publickey for core from 139.178.89.65 port 58222 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:19:03.990660 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:03.996874 systemd-logind[1441]: New session 7 of user core. Jan 17 12:19:04.012456 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:19:04.170195 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:19:04.170680 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:19:04.615539 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:19:04.627766 (dockerd)[1756]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:19:05.067891 dockerd[1756]: time="2025-01-17T12:19:05.067715462Z" level=info msg="Starting up" Jan 17 12:19:05.232897 dockerd[1756]: time="2025-01-17T12:19:05.232812879Z" level=info msg="Loading containers: start." Jan 17 12:19:05.381357 kernel: Initializing XFRM netlink socket Jan 17 12:19:05.491081 systemd-networkd[1370]: docker0: Link UP Jan 17 12:19:05.510022 dockerd[1756]: time="2025-01-17T12:19:05.509956982Z" level=info msg="Loading containers: done." Jan 17 12:19:05.533652 dockerd[1756]: time="2025-01-17T12:19:05.533592331Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:19:05.533984 dockerd[1756]: time="2025-01-17T12:19:05.533755593Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:19:05.533984 dockerd[1756]: time="2025-01-17T12:19:05.533919387Z" level=info msg="Daemon has completed initialization" Jan 17 12:19:05.571710 dockerd[1756]: time="2025-01-17T12:19:05.570961277Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:19:05.571401 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:19:08.528438 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
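Sessions 5 through 7 above run a handful of privileged commands via sudo (setenforce 1, removal of the default audit rules, an install.sh from the core user's home directory). A short sketch, under the same exported-journal assumption, that lists every sudo invocation together with the invoking user:

    import re

    # sudo logs invocations such as:
    #   sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
    SUDO = re.compile(r"sudo\[\d+\]:\s+(?P<user>\S+) : .*?COMMAND=(?P<cmd>.*)$")

    def sudo_commands(path: str):
        """Yield (invoking user, command) for each sudo COMMAND= entry in the log."""
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = SUDO.search(line)
                if m:
                    yield m["user"], m["cmd"].strip()

    for user, cmd in sudo_commands("node.log"):  # node.log: hypothetical journal export
        print(f"{user}: {cmd}")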
Jan 17 12:19:08.536596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:08.742646 containerd[1463]: time="2025-01-17T12:19:08.742538019Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:19:08.911909 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:08.922800 (kubelet)[1906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:08.997820 kubelet[1906]: E0117 12:19:08.997765 1906 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:09.000245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:09.000480 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:19:09.362382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165367323.mount: Deactivated successfully. Jan 17 12:19:11.120072 containerd[1463]: time="2025-01-17T12:19:11.120000605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:11.121862 containerd[1463]: time="2025-01-17T12:19:11.121756263Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35147358" Jan 17 12:19:11.123046 containerd[1463]: time="2025-01-17T12:19:11.122962638Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:11.127572 containerd[1463]: time="2025-01-17T12:19:11.127471165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:11.130105 containerd[1463]: time="2025-01-17T12:19:11.129841689Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.386168478s" Jan 17 12:19:11.130105 containerd[1463]: time="2025-01-17T12:19:11.129895664Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:19:11.159823 containerd[1463]: time="2025-01-17T12:19:11.159761962Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:19:12.887074 containerd[1463]: time="2025-01-17T12:19:12.887002166Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:12.888681 containerd[1463]: time="2025-01-17T12:19:12.888600079Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32218575" Jan 17 12:19:12.890080 containerd[1463]: time="2025-01-17T12:19:12.890009018Z" 
level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:12.893777 containerd[1463]: time="2025-01-17T12:19:12.893710283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:12.895194 containerd[1463]: time="2025-01-17T12:19:12.895114632Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.735293919s" Jan 17 12:19:12.895303 containerd[1463]: time="2025-01-17T12:19:12.895202857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:19:12.926706 containerd[1463]: time="2025-01-17T12:19:12.926621835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:19:13.956254 containerd[1463]: time="2025-01-17T12:19:13.956181754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:13.957902 containerd[1463]: time="2025-01-17T12:19:13.957832719Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17334757" Jan 17 12:19:13.959005 containerd[1463]: time="2025-01-17T12:19:13.958932372Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:13.962450 containerd[1463]: time="2025-01-17T12:19:13.962378250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:13.964102 containerd[1463]: time="2025-01-17T12:19:13.963854323Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.037178751s" Jan 17 12:19:13.964102 containerd[1463]: time="2025-01-17T12:19:13.963903596Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:19:13.995609 containerd[1463]: time="2025-01-17T12:19:13.995561423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:19:14.860236 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 12:19:15.138150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount599907577.mount: Deactivated successfully. 
Jan 17 12:19:15.683064 containerd[1463]: time="2025-01-17T12:19:15.682986187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.684374 containerd[1463]: time="2025-01-17T12:19:15.684308380Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28622836" Jan 17 12:19:15.685779 containerd[1463]: time="2025-01-17T12:19:15.685715359Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.688375 containerd[1463]: time="2025-01-17T12:19:15.688315015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:15.689848 containerd[1463]: time="2025-01-17T12:19:15.689235328Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.693493351s" Jan 17 12:19:15.689848 containerd[1463]: time="2025-01-17T12:19:15.689285097Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:19:15.720828 containerd[1463]: time="2025-01-17T12:19:15.720766345Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:19:16.157205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2970972514.mount: Deactivated successfully. 
Jan 17 12:19:17.230623 containerd[1463]: time="2025-01-17T12:19:17.230534925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.234191 containerd[1463]: time="2025-01-17T12:19:17.234045284Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 17 12:19:17.238168 containerd[1463]: time="2025-01-17T12:19:17.236261962Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.242281 containerd[1463]: time="2025-01-17T12:19:17.242230627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.244021 containerd[1463]: time="2025-01-17T12:19:17.243737017Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.52291519s" Jan 17 12:19:17.244021 containerd[1463]: time="2025-01-17T12:19:17.243793550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:19:17.274465 containerd[1463]: time="2025-01-17T12:19:17.274417261Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:19:17.656461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789861406.mount: Deactivated successfully. 
Jan 17 12:19:17.663546 containerd[1463]: time="2025-01-17T12:19:17.663479086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.664720 containerd[1463]: time="2025-01-17T12:19:17.664646847Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jan 17 12:19:17.666319 containerd[1463]: time="2025-01-17T12:19:17.666247774Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.669713 containerd[1463]: time="2025-01-17T12:19:17.669644602Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:17.670814 containerd[1463]: time="2025-01-17T12:19:17.670642503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 396.173063ms" Jan 17 12:19:17.670814 containerd[1463]: time="2025-01-17T12:19:17.670690212Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:19:17.703687 containerd[1463]: time="2025-01-17T12:19:17.703640803Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:19:18.102777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2476741341.mount: Deactivated successfully. Jan 17 12:19:19.028407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:19:19.039525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:19.348427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:19.362189 (kubelet)[2114]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:19:19.467122 kubelet[2114]: E0117 12:19:19.467031 2114 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:19:19.471917 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:19:19.473031 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
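kubelet.service has now failed three times with the same error: /var/lib/kubelet/config.yaml does not exist yet, so each start exits with status 1 and systemd schedules another restart. The file is normally generated when kubeadm initializes or joins the node, after which the loop resolves, as it does further below. A small sketch over the same exported journal that makes the crash loop explicit:

    import re

    RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
    MISSING = re.compile(r"open /var/lib/kubelet/config\.yaml: no such file or directory")

    def summarize_kubelet_restarts(path: str) -> None:
        """Print each scheduled kubelet restart and flag the missing-config failures."""
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = RESTART.search(line)
                if m:
                    print(f"kubelet restart #{m.group(1)} scheduled")
                elif MISSING.search(line):
                    print("kubelet failed: /var/lib/kubelet/config.yaml is missing")

    summarize_kubelet_restarts("node.log")  # node.log: hypothetical journal export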
Jan 17 12:19:20.438580 containerd[1463]: time="2025-01-17T12:19:20.438494823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:20.440262 containerd[1463]: time="2025-01-17T12:19:20.440193343Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115" Jan 17 12:19:20.441645 containerd[1463]: time="2025-01-17T12:19:20.441564050Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:20.445493 containerd[1463]: time="2025-01-17T12:19:20.445410673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:20.447187 containerd[1463]: time="2025-01-17T12:19:20.446955973Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.74326287s" Jan 17 12:19:20.447187 containerd[1463]: time="2025-01-17T12:19:20.447005174Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:19:23.293314 systemd[1]: Started sshd@10-10.128.0.67:22-51.178.141.222:56360.service - OpenSSH per-connection server daemon (51.178.141.222:56360). Jan 17 12:19:23.910580 sshd[2181]: Invalid user demo from 51.178.141.222 port 56360 Jan 17 12:19:24.031168 sshd[2181]: Received disconnect from 51.178.141.222 port 56360:11: Bye Bye [preauth] Jan 17 12:19:24.031168 sshd[2181]: Disconnected from invalid user demo 51.178.141.222 port 56360 [preauth] Jan 17 12:19:24.033714 systemd[1]: sshd@10-10.128.0.67:22-51.178.141.222:56360.service: Deactivated successfully. Jan 17 12:19:24.325483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:24.339631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:24.374227 systemd[1]: Reloading requested from client PID 2191 ('systemctl') (unit session-7.scope)... Jan 17 12:19:24.374251 systemd[1]: Reloading... Jan 17 12:19:24.533178 zram_generator::config[2231]: No configuration found. Jan 17 12:19:24.685936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:24.787012 systemd[1]: Reloading finished in 412 ms. Jan 17 12:19:24.846062 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:19:24.846231 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:19:24.846645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:24.854712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:25.565113 systemd[1]: Started sshd@11-10.128.0.67:22-115.91.91.182:43664.service - OpenSSH per-connection server daemon (115.91.91.182:43664). Jan 17 12:19:25.833000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
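By this point containerd has pulled the full control-plane image set (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd), and each completion entry above records how long the pull took. A sketch, under the same journal-export assumption, that extracts and ranks those durations:

    import re

    # containerd records completions such as:
    #   msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" ... in 2.74326287s"
    PULLED = re.compile(
        r'Pulled image \\"(?P<image>[^"\\]+)\\".*? in (?P<value>[0-9.]+)(?P<unit>ms|s)'
    )

    def pull_durations(path: str):
        """Yield (image, seconds) for every pull completion found in the log."""
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for m in PULLED.finditer(line):
                    seconds = float(m["value"]) / (1000.0 if m["unit"] == "ms" else 1.0)
                    yield m["image"], seconds

    for image, seconds in sorted(pull_durations("node.log"), key=lambda x: -x[1]):
        print(f"{seconds:8.3f}s  {image}")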
Jan 17 12:19:25.848737 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:25.910294 kubelet[2285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:25.910294 kubelet[2285]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:19:25.910294 kubelet[2285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:25.910853 kubelet[2285]: I0117 12:19:25.910426 2285 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:26.469971 kubelet[2285]: I0117 12:19:26.469920 2285 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:19:26.469971 kubelet[2285]: I0117 12:19:26.469956 2285 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:26.470387 kubelet[2285]: I0117 12:19:26.470350 2285 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:19:26.501858 kubelet[2285]: E0117 12:19:26.501819 2285 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.505057 kubelet[2285]: I0117 12:19:26.504870 2285 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:26.523218 kubelet[2285]: I0117 12:19:26.522659 2285 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:19:26.525519 kubelet[2285]: I0117 12:19:26.525482 2285 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:26.525805 kubelet[2285]: I0117 12:19:26.525775 2285 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:19:26.527164 kubelet[2285]: I0117 12:19:26.526850 2285 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:26.527164 kubelet[2285]: I0117 12:19:26.526893 2285 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:19:26.527164 kubelet[2285]: I0117 12:19:26.527059 2285 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:26.527514 kubelet[2285]: I0117 12:19:26.527498 2285 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:19:26.527737 kubelet[2285]: I0117 12:19:26.527695 2285 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:26.527873 kubelet[2285]: I0117 12:19:26.527859 2285 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:19:26.527966 kubelet[2285]: I0117 12:19:26.527954 2285 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:26.532019 kubelet[2285]: W0117 12:19:26.531258 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.532019 kubelet[2285]: E0117 12:19:26.531334 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.532019 kubelet[2285]: W0117 12:19:26.531786 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.532019 kubelet[2285]: E0117 12:19:26.531838 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.532516 kubelet[2285]: I0117 12:19:26.532492 2285 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:26.533158 sshd[2278]: Invalid user s from 115.91.91.182 port 43664 Jan 17 12:19:26.539910 kubelet[2285]: I0117 12:19:26.539319 2285 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:26.539910 kubelet[2285]: W0117 12:19:26.539420 2285 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:19:26.542332 kubelet[2285]: I0117 12:19:26.541875 2285 server.go:1256] "Started kubelet" Jan 17 12:19:26.542576 kubelet[2285]: I0117 12:19:26.542554 2285 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:26.544427 kubelet[2285]: I0117 12:19:26.543600 2285 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:19:26.546951 kubelet[2285]: I0117 12:19:26.546896 2285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:26.551094 kubelet[2285]: I0117 12:19:26.550279 2285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:19:26.551094 kubelet[2285]: I0117 12:19:26.550558 2285 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:26.553776 kubelet[2285]: E0117 12:19:26.553748 2285 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal.181b7a21b93e61de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,UID:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,},FirstTimestamp:2025-01-17 12:19:26.541840862 +0000 UTC m=+0.686791593,LastTimestamp:2025-01-17 12:19:26.541840862 +0000 UTC m=+0.686791593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,}" Jan 17 12:19:26.558943 kubelet[2285]: I0117 12:19:26.558540 2285 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:19:26.559962 kubelet[2285]: E0117 12:19:26.559916 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="200ms" Jan 17 12:19:26.560987 kubelet[2285]: 
I0117 12:19:26.560339 2285 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:19:26.560987 kubelet[2285]: W0117 12:19:26.560788 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.560987 kubelet[2285]: E0117 12:19:26.560848 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.560987 kubelet[2285]: I0117 12:19:26.560927 2285 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:19:26.562351 kubelet[2285]: I0117 12:19:26.562041 2285 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:19:26.562351 kubelet[2285]: I0117 12:19:26.562177 2285 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:26.566787 kubelet[2285]: I0117 12:19:26.565511 2285 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:26.582249 kubelet[2285]: E0117 12:19:26.582214 2285 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:26.591673 kubelet[2285]: I0117 12:19:26.591305 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:19:26.593706 kubelet[2285]: I0117 12:19:26.593678 2285 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:19:26.593973 kubelet[2285]: I0117 12:19:26.593887 2285 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:26.593973 kubelet[2285]: I0117 12:19:26.593921 2285 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:19:26.594257 kubelet[2285]: E0117 12:19:26.593992 2285 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:19:26.595917 kubelet[2285]: W0117 12:19:26.595770 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.595917 kubelet[2285]: E0117 12:19:26.595839 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:26.598838 kubelet[2285]: I0117 12:19:26.598818 2285 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:26.599256 kubelet[2285]: I0117 12:19:26.599038 2285 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:26.599256 kubelet[2285]: I0117 12:19:26.599209 2285 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:26.664582 kubelet[2285]: I0117 12:19:26.664525 2285 policy_none.go:49] "None policy: Start" Jan 17 12:19:26.665723 kubelet[2285]: I0117 12:19:26.665645 2285 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:26.665723 kubelet[2285]: I0117 12:19:26.665683 2285 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:19:26.666452 kubelet[2285]: I0117 12:19:26.666406 2285 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.666909 kubelet[2285]: E0117 12:19:26.666886 2285 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.694160 kubelet[2285]: E0117 12:19:26.694085 2285 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:19:26.712261 sshd[2278]: Received disconnect from 115.91.91.182 port 43664:11: Bye Bye [preauth] Jan 17 12:19:26.712261 sshd[2278]: Disconnected from invalid user s 115.91.91.182 port 43664 [preauth] Jan 17 12:19:26.714008 systemd[1]: sshd@11-10.128.0.67:22-115.91.91.182:43664.service: Deactivated successfully. Jan 17 12:19:26.741659 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:19:26.751932 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:19:26.756901 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:19:26.764606 kubelet[2285]: E0117 12:19:26.764562 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="400ms" Jan 17 12:19:26.767754 kubelet[2285]: I0117 12:19:26.767222 2285 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:26.767754 kubelet[2285]: I0117 12:19:26.767612 2285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:26.770032 kubelet[2285]: E0117 12:19:26.770005 2285 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" not found" Jan 17 12:19:26.876010 kubelet[2285]: I0117 12:19:26.875966 2285 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.876414 kubelet[2285]: E0117 12:19:26.876375 2285 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.894586 kubelet[2285]: I0117 12:19:26.894523 2285 topology_manager.go:215] "Topology Admit Handler" podUID="4e9a19aa5e62d540356c08c5e11352bc" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.900487 kubelet[2285]: I0117 12:19:26.900441 2285 topology_manager.go:215] "Topology Admit Handler" podUID="b86f59fc95b6f2be8458f6233e4bbd44" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.919599 kubelet[2285]: I0117 12:19:26.919257 2285 topology_manager.go:215] "Topology Admit Handler" podUID="301ead69caf8b32bb1f72a14e0b126b0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.927736 systemd[1]: Created slice kubepods-burstable-pod4e9a19aa5e62d540356c08c5e11352bc.slice - libcontainer container kubepods-burstable-pod4e9a19aa5e62d540356c08c5e11352bc.slice. Jan 17 12:19:26.941401 systemd[1]: Created slice kubepods-burstable-podb86f59fc95b6f2be8458f6233e4bbd44.slice - libcontainer container kubepods-burstable-podb86f59fc95b6f2be8458f6233e4bbd44.slice. Jan 17 12:19:26.956578 systemd[1]: Created slice kubepods-burstable-pod301ead69caf8b32bb1f72a14e0b126b0.slice - libcontainer container kubepods-burstable-pod301ead69caf8b32bb1f72a14e0b126b0.slice. 
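The kubelet is now following the conventional kubeadm file layout: static pod manifests come from /etc/kubernetes/manifests, the client CA bundle from /etc/kubernetes/pki/ca.crt, the flexvolume plugin directory sits under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, and the earlier crash loop was caused by a missing /var/lib/kubelet/config.yaml. A tiny on-node check of those paths (only paths named in the entries above; nothing else is assumed):

    from pathlib import Path

    # Paths referenced directly by the kubelet log entries above.
    PATHS = [
        Path("/var/lib/kubelet/config.yaml"),        # absent during the earlier crash loop
        Path("/etc/kubernetes/manifests"),           # static pod path
        Path("/etc/kubernetes/pki/ca.crt"),          # client-ca bundle
        Path("/opt/libexec/kubernetes/kubelet-plugins/volume/exec"),  # flexvolume plugin dir
    ]

    for p in PATHS:
        print(f"{'OK     ' if p.exists() else 'MISSING'}  {p}")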
Jan 17 12:19:26.965464 kubelet[2285]: I0117 12:19:26.965430 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b86f59fc95b6f2be8458f6233e4bbd44-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"b86f59fc95b6f2be8458f6233e4bbd44\") " pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.965464 kubelet[2285]: I0117 12:19:26.965490 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.965464 kubelet[2285]: I0117 12:19:26.965527 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.965871 kubelet[2285]: I0117 12:19:26.965563 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b86f59fc95b6f2be8458f6233e4bbd44-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"b86f59fc95b6f2be8458f6233e4bbd44\") " pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.965871 kubelet[2285]: I0117 12:19:26.965602 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b86f59fc95b6f2be8458f6233e4bbd44-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"b86f59fc95b6f2be8458f6233e4bbd44\") " pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.965871 kubelet[2285]: I0117 12:19:26.965639 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.965871 kubelet[2285]: I0117 12:19:26.965694 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.966066 kubelet[2285]: I0117 12:19:26.965733 2285 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:26.966066 kubelet[2285]: I0117 12:19:26.965767 2285 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e9a19aa5e62d540356c08c5e11352bc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"4e9a19aa5e62d540356c08c5e11352bc\") " pod="kube-system/kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:27.165879 kubelet[2285]: E0117 12:19:27.165832 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="800ms" Jan 17 12:19:27.239313 containerd[1463]: time="2025-01-17T12:19:27.239235112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,Uid:4e9a19aa5e62d540356c08c5e11352bc,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:27.254372 containerd[1463]: time="2025-01-17T12:19:27.254284410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,Uid:b86f59fc95b6f2be8458f6233e4bbd44,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:27.261236 containerd[1463]: time="2025-01-17T12:19:27.261182365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,Uid:301ead69caf8b32bb1f72a14e0b126b0,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:27.282211 kubelet[2285]: I0117 12:19:27.282175 2285 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:27.282828 kubelet[2285]: E0117 12:19:27.282798 2285 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:27.552220 kubelet[2285]: W0117 12:19:27.552116 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.552220 kubelet[2285]: E0117 12:19:27.552228 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.560014 kubelet[2285]: W0117 12:19:27.559832 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get 
"https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.560014 kubelet[2285]: E0117 12:19:27.560011 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.631487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413835072.mount: Deactivated successfully. Jan 17 12:19:27.639309 containerd[1463]: time="2025-01-17T12:19:27.639199403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:27.640677 containerd[1463]: time="2025-01-17T12:19:27.640613820Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:27.641936 containerd[1463]: time="2025-01-17T12:19:27.641871628Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 17 12:19:27.643064 containerd[1463]: time="2025-01-17T12:19:27.642971712Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:27.644872 containerd[1463]: time="2025-01-17T12:19:27.644828598Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:27.645743 containerd[1463]: time="2025-01-17T12:19:27.645600664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:27.647227 containerd[1463]: time="2025-01-17T12:19:27.647023597Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:19:27.648525 containerd[1463]: time="2025-01-17T12:19:27.648417772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:19:27.652320 containerd[1463]: time="2025-01-17T12:19:27.651622194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 390.342052ms" Jan 17 12:19:27.654268 containerd[1463]: time="2025-01-17T12:19:27.654215503Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 414.866771ms" Jan 17 12:19:27.664840 containerd[1463]: time="2025-01-17T12:19:27.664758764Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.359137ms" Jan 17 12:19:27.782678 kubelet[2285]: W0117 12:19:27.782464 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.782678 kubelet[2285]: E0117 12:19:27.782547 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.67:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.791905 kubelet[2285]: W0117 12:19:27.791791 2285 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.791905 kubelet[2285]: E0117 12:19:27.791876 2285 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.67:6443: connect: connection refused Jan 17 12:19:27.863547 containerd[1463]: time="2025-01-17T12:19:27.862964623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:27.863547 containerd[1463]: time="2025-01-17T12:19:27.863044969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:27.863547 containerd[1463]: time="2025-01-17T12:19:27.863081761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:27.866522 containerd[1463]: time="2025-01-17T12:19:27.864813042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:27.868775 containerd[1463]: time="2025-01-17T12:19:27.864352947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:27.868775 containerd[1463]: time="2025-01-17T12:19:27.868540163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:27.869188 containerd[1463]: time="2025-01-17T12:19:27.868601237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:27.869188 containerd[1463]: time="2025-01-17T12:19:27.868621901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:27.869188 containerd[1463]: time="2025-01-17T12:19:27.868727042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:27.869394 containerd[1463]: time="2025-01-17T12:19:27.869073604Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:27.869394 containerd[1463]: time="2025-01-17T12:19:27.869099895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:27.870961 containerd[1463]: time="2025-01-17T12:19:27.869560709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:27.906396 systemd[1]: Started cri-containerd-da847c181240a1f07539149b2f6df0737eb878fa0930c0787d5ba36590ccc446.scope - libcontainer container da847c181240a1f07539149b2f6df0737eb878fa0930c0787d5ba36590ccc446. Jan 17 12:19:27.917501 systemd[1]: Started cri-containerd-ad9966dcd7b9e1b81565231d484b940390af88889f86d31d2efa48ab3b32873b.scope - libcontainer container ad9966dcd7b9e1b81565231d484b940390af88889f86d31d2efa48ab3b32873b. Jan 17 12:19:27.924980 systemd[1]: Started cri-containerd-8ca4f144c05ae93ba61b6130ab280a5579fa0d1ae74590079bfffc33863121fd.scope - libcontainer container 8ca4f144c05ae93ba61b6130ab280a5579fa0d1ae74590079bfffc33863121fd. Jan 17 12:19:27.967109 kubelet[2285]: E0117 12:19:27.967072 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.67:6443: connect: connection refused" interval="1.6s" Jan 17 12:19:28.033260 containerd[1463]: time="2025-01-17T12:19:28.033175132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,Uid:4e9a19aa5e62d540356c08c5e11352bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad9966dcd7b9e1b81565231d484b940390af88889f86d31d2efa48ab3b32873b\"" Jan 17 12:19:28.033839 containerd[1463]: time="2025-01-17T12:19:28.033805585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,Uid:301ead69caf8b32bb1f72a14e0b126b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"da847c181240a1f07539149b2f6df0737eb878fa0930c0787d5ba36590ccc446\"" Jan 17 12:19:28.038839 kubelet[2285]: E0117 12:19:28.038808 2285 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flat" Jan 17 12:19:28.039521 kubelet[2285]: E0117 12:19:28.039367 2285 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-21291" Jan 17 12:19:28.044172 containerd[1463]: time="2025-01-17T12:19:28.042587908Z" level=info msg="CreateContainer within sandbox \"ad9966dcd7b9e1b81565231d484b940390af88889f86d31d2efa48ab3b32873b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:19:28.044172 containerd[1463]: time="2025-01-17T12:19:28.043358511Z" level=info msg="CreateContainer within sandbox 
\"da847c181240a1f07539149b2f6df0737eb878fa0930c0787d5ba36590ccc446\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:19:28.054690 containerd[1463]: time="2025-01-17T12:19:28.054621740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal,Uid:b86f59fc95b6f2be8458f6233e4bbd44,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ca4f144c05ae93ba61b6130ab280a5579fa0d1ae74590079bfffc33863121fd\"" Jan 17 12:19:28.056461 kubelet[2285]: E0117 12:19:28.056417 2285 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-21291" Jan 17 12:19:28.058969 containerd[1463]: time="2025-01-17T12:19:28.058925738Z" level=info msg="CreateContainer within sandbox \"8ca4f144c05ae93ba61b6130ab280a5579fa0d1ae74590079bfffc33863121fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:19:28.072172 containerd[1463]: time="2025-01-17T12:19:28.071758071Z" level=info msg="CreateContainer within sandbox \"da847c181240a1f07539149b2f6df0737eb878fa0930c0787d5ba36590ccc446\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b068436a5967ced41eb35c5f53d36dc96984376ffeb57f9d1b2bbcea0292e93f\"" Jan 17 12:19:28.073543 containerd[1463]: time="2025-01-17T12:19:28.073508943Z" level=info msg="StartContainer for \"b068436a5967ced41eb35c5f53d36dc96984376ffeb57f9d1b2bbcea0292e93f\"" Jan 17 12:19:28.078093 containerd[1463]: time="2025-01-17T12:19:28.077945778Z" level=info msg="CreateContainer within sandbox \"ad9966dcd7b9e1b81565231d484b940390af88889f86d31d2efa48ab3b32873b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5523e96bbecb5168dd24b80e100ae80c69a1ccbf883af65c829033d02f9586ff\"" Jan 17 12:19:28.079051 containerd[1463]: time="2025-01-17T12:19:28.078863301Z" level=info msg="StartContainer for \"5523e96bbecb5168dd24b80e100ae80c69a1ccbf883af65c829033d02f9586ff\"" Jan 17 12:19:28.090011 kubelet[2285]: I0117 12:19:28.089546 2285 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:28.090011 kubelet[2285]: E0117 12:19:28.089978 2285 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.67:6443/api/v1/nodes\": dial tcp 10.128.0.67:6443: connect: connection refused" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:28.093343 containerd[1463]: time="2025-01-17T12:19:28.092590274Z" level=info msg="CreateContainer within sandbox \"8ca4f144c05ae93ba61b6130ab280a5579fa0d1ae74590079bfffc33863121fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0013a2522ba547e19f240af44626699364e16a55695090b0b6308bf0fa9fa11f\"" Jan 17 12:19:28.095212 containerd[1463]: time="2025-01-17T12:19:28.093717306Z" level=info msg="StartContainer for \"0013a2522ba547e19f240af44626699364e16a55695090b0b6308bf0fa9fa11f\"" Jan 17 12:19:28.136431 systemd[1]: Started cri-containerd-b068436a5967ced41eb35c5f53d36dc96984376ffeb57f9d1b2bbcea0292e93f.scope - libcontainer container b068436a5967ced41eb35c5f53d36dc96984376ffeb57f9d1b2bbcea0292e93f. 
Jan 17 12:19:28.150394 systemd[1]: Started cri-containerd-5523e96bbecb5168dd24b80e100ae80c69a1ccbf883af65c829033d02f9586ff.scope - libcontainer container 5523e96bbecb5168dd24b80e100ae80c69a1ccbf883af65c829033d02f9586ff. Jan 17 12:19:28.166827 systemd[1]: Started cri-containerd-0013a2522ba547e19f240af44626699364e16a55695090b0b6308bf0fa9fa11f.scope - libcontainer container 0013a2522ba547e19f240af44626699364e16a55695090b0b6308bf0fa9fa11f. Jan 17 12:19:28.277038 containerd[1463]: time="2025-01-17T12:19:28.276273956Z" level=info msg="StartContainer for \"b068436a5967ced41eb35c5f53d36dc96984376ffeb57f9d1b2bbcea0292e93f\" returns successfully" Jan 17 12:19:28.285421 containerd[1463]: time="2025-01-17T12:19:28.285353285Z" level=info msg="StartContainer for \"5523e96bbecb5168dd24b80e100ae80c69a1ccbf883af65c829033d02f9586ff\" returns successfully" Jan 17 12:19:28.302160 containerd[1463]: time="2025-01-17T12:19:28.301919610Z" level=info msg="StartContainer for \"0013a2522ba547e19f240af44626699364e16a55695090b0b6308bf0fa9fa11f\" returns successfully" Jan 17 12:19:29.002123 update_engine[1449]: I20250117 12:19:29.001178 1449 update_attempter.cc:509] Updating boot flags... Jan 17 12:19:29.106306 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2571) Jan 17 12:19:29.311178 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2575) Jan 17 12:19:29.699099 kubelet[2285]: I0117 12:19:29.698343 2285 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:31.190894 kubelet[2285]: E0117 12:19:31.190698 2285 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:31.292744 kubelet[2285]: I0117 12:19:31.292411 2285 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:31.533463 kubelet[2285]: I0117 12:19:31.533390 2285 apiserver.go:52] "Watching apiserver" Jan 17 12:19:31.561575 kubelet[2285]: I0117 12:19:31.561526 2285 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:19:32.313472 kubelet[2285]: W0117 12:19:32.313429 2285 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:19:32.614581 kubelet[2285]: W0117 12:19:32.614175 2285 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:19:33.996271 systemd[1]: Reloading requested from client PID 2582 ('systemctl') (unit session-7.scope)... Jan 17 12:19:33.996295 systemd[1]: Reloading... Jan 17 12:19:34.132172 zram_generator::config[2623]: No configuration found. Jan 17 12:19:34.282924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:19:34.411012 systemd[1]: Reloading finished in 413 ms. 
Jan 17 12:19:34.474025 kubelet[2285]: I0117 12:19:34.473936 2285 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:34.474079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:34.481683 systemd[1]: Started sshd@12-10.128.0.67:22-85.190.243.197:34888.service - OpenSSH per-connection server daemon (85.190.243.197:34888). Jan 17 12:19:34.493583 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:19:34.493937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:34.494023 systemd[1]: kubelet.service: Consumed 1.220s CPU time, 113.1M memory peak, 0B memory swap peak. Jan 17 12:19:34.503675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:19:34.736639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:19:34.753817 (kubelet)[2673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:19:34.845177 kubelet[2673]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:34.845177 kubelet[2673]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:19:34.845177 kubelet[2673]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:19:34.845177 kubelet[2673]: I0117 12:19:34.844759 2673 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:19:34.855282 kubelet[2673]: I0117 12:19:34.853543 2673 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:19:34.855282 kubelet[2673]: I0117 12:19:34.853578 2673 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:19:34.855282 kubelet[2673]: I0117 12:19:34.853946 2673 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:19:34.857015 kubelet[2673]: I0117 12:19:34.856964 2673 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:19:34.860336 kubelet[2673]: I0117 12:19:34.860106 2673 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:19:34.873753 kubelet[2673]: I0117 12:19:34.872695 2673 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:19:34.873753 kubelet[2673]: I0117 12:19:34.873052 2673 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:19:34.873753 kubelet[2673]: I0117 12:19:34.873402 2673 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:19:34.873753 kubelet[2673]: I0117 12:19:34.873453 2673 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:19:34.873753 kubelet[2673]: I0117 12:19:34.873477 2673 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:19:34.873753 kubelet[2673]: I0117 12:19:34.873540 2673 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:34.874295 kubelet[2673]: I0117 12:19:34.873748 2673 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:19:34.874295 kubelet[2673]: I0117 12:19:34.873776 2673 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:19:34.874295 kubelet[2673]: I0117 12:19:34.873857 2673 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:19:34.874295 kubelet[2673]: I0117 12:19:34.873877 2673 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:19:34.878876 kubelet[2673]: I0117 12:19:34.877302 2673 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:19:34.878876 kubelet[2673]: I0117 12:19:34.877687 2673 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:19:34.878876 kubelet[2673]: I0117 12:19:34.878841 2673 server.go:1256] "Started kubelet" Jan 17 12:19:34.887352 kubelet[2673]: I0117 12:19:34.887081 2673 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:19:34.895943 kubelet[2673]: I0117 12:19:34.892919 2673 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:19:34.900749 kubelet[2673]: I0117 12:19:34.899882 2673 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:19:34.905004 kubelet[2673]: I0117 12:19:34.904970 2673 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 17 12:19:34.905296 kubelet[2673]: I0117 12:19:34.905268 2673 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:19:34.909266 kubelet[2673]: I0117 12:19:34.909241 2673 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:19:34.922107 kubelet[2673]: I0117 12:19:34.922069 2673 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:19:34.923497 kubelet[2673]: I0117 12:19:34.923468 2673 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:19:34.930679 kubelet[2673]: I0117 12:19:34.930642 2673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:19:34.935222 kubelet[2673]: I0117 12:19:34.934882 2673 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:19:34.935222 kubelet[2673]: I0117 12:19:34.934936 2673 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:19:34.935222 kubelet[2673]: I0117 12:19:34.934965 2673 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:19:34.935222 kubelet[2673]: E0117 12:19:34.935043 2673 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:19:34.939931 kubelet[2673]: I0117 12:19:34.938832 2673 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:19:34.940321 kubelet[2673]: I0117 12:19:34.940183 2673 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:19:34.956994 kubelet[2673]: I0117 12:19:34.955540 2673 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:19:34.965944 kubelet[2673]: E0117 12:19:34.965714 2673 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:19:35.016553 kubelet[2673]: I0117 12:19:35.016395 2673 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.030699 kubelet[2673]: I0117 12:19:35.030657 2673 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:19:35.030699 kubelet[2673]: I0117 12:19:35.030699 2673 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:19:35.030930 kubelet[2673]: I0117 12:19:35.030725 2673 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:19:35.031031 kubelet[2673]: I0117 12:19:35.031013 2673 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:19:35.031160 kubelet[2673]: I0117 12:19:35.031058 2673 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:19:35.031160 kubelet[2673]: I0117 12:19:35.031072 2673 policy_none.go:49] "None policy: Start" Jan 17 12:19:35.037002 kubelet[2673]: E0117 12:19:35.036902 2673 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:19:35.038532 kubelet[2673]: I0117 12:19:35.037697 2673 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:19:35.038532 kubelet[2673]: I0117 12:19:35.037741 2673 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:19:35.038532 kubelet[2673]: I0117 12:19:35.038116 2673 state_mem.go:75] "Updated machine memory state" Jan 17 12:19:35.048535 kubelet[2673]: I0117 12:19:35.047505 2673 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.048535 kubelet[2673]: I0117 12:19:35.047608 2673 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.056687 kubelet[2673]: I0117 12:19:35.056658 2673 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:19:35.060201 kubelet[2673]: I0117 12:19:35.057608 2673 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:19:35.237461 kubelet[2673]: I0117 12:19:35.237276 2673 topology_manager.go:215] "Topology Admit Handler" podUID="b86f59fc95b6f2be8458f6233e4bbd44" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.237461 kubelet[2673]: I0117 12:19:35.237408 2673 topology_manager.go:215] "Topology Admit Handler" podUID="301ead69caf8b32bb1f72a14e0b126b0" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.237461 kubelet[2673]: I0117 12:19:35.237461 2673 topology_manager.go:215] "Topology Admit Handler" podUID="4e9a19aa5e62d540356c08c5e11352bc" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.247092 kubelet[2673]: W0117 12:19:35.246220 2673 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:19:35.248043 kubelet[2673]: W0117 12:19:35.247526 2673 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain 
dots] Jan 17 12:19:35.248043 kubelet[2673]: E0117 12:19:35.247611 2673 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.248411 kubelet[2673]: W0117 12:19:35.248382 2673 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:19:35.248521 kubelet[2673]: E0117 12:19:35.248457 2673 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327191 kubelet[2673]: I0117 12:19:35.326606 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b86f59fc95b6f2be8458f6233e4bbd44-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"b86f59fc95b6f2be8458f6233e4bbd44\") " pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327191 kubelet[2673]: I0117 12:19:35.326694 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b86f59fc95b6f2be8458f6233e4bbd44-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"b86f59fc95b6f2be8458f6233e4bbd44\") " pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327191 kubelet[2673]: I0117 12:19:35.326754 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327191 kubelet[2673]: I0117 12:19:35.326801 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327571 kubelet[2673]: I0117 12:19:35.326848 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4e9a19aa5e62d540356c08c5e11352bc-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"4e9a19aa5e62d540356c08c5e11352bc\") " pod="kube-system/kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327571 kubelet[2673]: I0117 12:19:35.326886 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b86f59fc95b6f2be8458f6233e4bbd44-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"b86f59fc95b6f2be8458f6233e4bbd44\") " pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327571 kubelet[2673]: I0117 12:19:35.326924 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327571 kubelet[2673]: I0117 12:19:35.326966 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.327812 kubelet[2673]: I0117 12:19:35.327012 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/301ead69caf8b32bb1f72a14e0b126b0-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" (UID: \"301ead69caf8b32bb1f72a14e0b126b0\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:35.887326 kubelet[2673]: I0117 12:19:35.887012 2673 apiserver.go:52] "Watching apiserver" Jan 17 12:19:35.924479 kubelet[2673]: I0117 12:19:35.924372 2673 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:19:36.015500 kubelet[2673]: W0117 12:19:36.015460 2673 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 17 12:19:36.016644 kubelet[2673]: E0117 12:19:36.015806 2673 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:19:36.085171 kubelet[2673]: I0117 12:19:36.085107 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" podStartSLOduration=1.08500908 podStartE2EDuration="1.08500908s" podCreationTimestamp="2025-01-17 12:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:36.063811237 +0000 UTC m=+1.302801422" watchObservedRunningTime="2025-01-17 12:19:36.08500908 +0000 UTC m=+1.323999261" Jan 17 12:19:36.105535 kubelet[2673]: I0117 12:19:36.105050 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" podStartSLOduration=4.104981441 
podStartE2EDuration="4.104981441s" podCreationTimestamp="2025-01-17 12:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:36.086088488 +0000 UTC m=+1.325078672" watchObservedRunningTime="2025-01-17 12:19:36.104981441 +0000 UTC m=+1.343971626" Jan 17 12:19:36.105535 kubelet[2673]: I0117 12:19:36.105206 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" podStartSLOduration=4.10517507 podStartE2EDuration="4.10517507s" podCreationTimestamp="2025-01-17 12:19:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:36.100957538 +0000 UTC m=+1.339947723" watchObservedRunningTime="2025-01-17 12:19:36.10517507 +0000 UTC m=+1.344165255" Jan 17 12:19:40.427185 sshd[2662]: Connection closed by 85.190.243.197 port 34888 [preauth] Jan 17 12:19:40.428823 systemd[1]: sshd@12-10.128.0.67:22-85.190.243.197:34888.service: Deactivated successfully. Jan 17 12:19:40.523985 sudo[1740]: pam_unix(sudo:session): session closed for user root Jan 17 12:19:40.568299 sshd[1737]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:40.576283 systemd[1]: sshd@9-10.128.0.67:22-139.178.89.65:58222.service: Deactivated successfully. Jan 17 12:19:40.579350 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:19:40.579748 systemd[1]: session-7.scope: Consumed 6.962s CPU time, 191.4M memory peak, 0B memory swap peak. Jan 17 12:19:40.582091 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:19:40.583867 systemd-logind[1441]: Removed session 7. Jan 17 12:19:49.874176 kubelet[2673]: I0117 12:19:49.874097 2673 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:19:49.874784 containerd[1463]: time="2025-01-17T12:19:49.874624733Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:19:49.875260 kubelet[2673]: I0117 12:19:49.874905 2673 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:19:50.180661 systemd[1]: Started sshd@13-10.128.0.67:22-51.178.141.222:37592.service - OpenSSH per-connection server daemon (51.178.141.222:37592). Jan 17 12:19:50.826184 sshd[2753]: Invalid user j from 51.178.141.222 port 37592 Jan 17 12:19:50.833578 kubelet[2673]: I0117 12:19:50.833515 2673 topology_manager.go:215] "Topology Admit Handler" podUID="3053dc65-d193-49e7-8b44-ddf9f8448655" podNamespace="kube-system" podName="kube-proxy-vfj2w" Jan 17 12:19:50.851673 systemd[1]: Created slice kubepods-besteffort-pod3053dc65_d193_49e7_8b44_ddf9f8448655.slice - libcontainer container kubepods-besteffort-pod3053dc65_d193_49e7_8b44_ddf9f8448655.slice. 
Jan 17 12:19:50.930363 kubelet[2673]: I0117 12:19:50.930310 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66gjj\" (UniqueName: \"kubernetes.io/projected/3053dc65-d193-49e7-8b44-ddf9f8448655-kube-api-access-66gjj\") pod \"kube-proxy-vfj2w\" (UID: \"3053dc65-d193-49e7-8b44-ddf9f8448655\") " pod="kube-system/kube-proxy-vfj2w" Jan 17 12:19:50.930945 kubelet[2673]: I0117 12:19:50.930444 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3053dc65-d193-49e7-8b44-ddf9f8448655-xtables-lock\") pod \"kube-proxy-vfj2w\" (UID: \"3053dc65-d193-49e7-8b44-ddf9f8448655\") " pod="kube-system/kube-proxy-vfj2w" Jan 17 12:19:50.930945 kubelet[2673]: I0117 12:19:50.930513 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3053dc65-d193-49e7-8b44-ddf9f8448655-kube-proxy\") pod \"kube-proxy-vfj2w\" (UID: \"3053dc65-d193-49e7-8b44-ddf9f8448655\") " pod="kube-system/kube-proxy-vfj2w" Jan 17 12:19:50.930945 kubelet[2673]: I0117 12:19:50.930553 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3053dc65-d193-49e7-8b44-ddf9f8448655-lib-modules\") pod \"kube-proxy-vfj2w\" (UID: \"3053dc65-d193-49e7-8b44-ddf9f8448655\") " pod="kube-system/kube-proxy-vfj2w" Jan 17 12:19:50.948726 sshd[2753]: Received disconnect from 51.178.141.222 port 37592:11: Bye Bye [preauth] Jan 17 12:19:50.948726 sshd[2753]: Disconnected from invalid user j 51.178.141.222 port 37592 [preauth] Jan 17 12:19:50.952034 systemd[1]: sshd@13-10.128.0.67:22-51.178.141.222:37592.service: Deactivated successfully. Jan 17 12:19:51.006645 kubelet[2673]: I0117 12:19:51.006588 2673 topology_manager.go:215] "Topology Admit Handler" podUID="eb1a5c3d-aea8-4d9a-846c-3d2b1d8a979a" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-sbq7x" Jan 17 12:19:51.020256 systemd[1]: Created slice kubepods-besteffort-podeb1a5c3d_aea8_4d9a_846c_3d2b1d8a979a.slice - libcontainer container kubepods-besteffort-podeb1a5c3d_aea8_4d9a_846c_3d2b1d8a979a.slice. Jan 17 12:19:51.033074 kubelet[2673]: I0117 12:19:51.031423 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxfc6\" (UniqueName: \"kubernetes.io/projected/eb1a5c3d-aea8-4d9a-846c-3d2b1d8a979a-kube-api-access-nxfc6\") pod \"tigera-operator-c7ccbd65-sbq7x\" (UID: \"eb1a5c3d-aea8-4d9a-846c-3d2b1d8a979a\") " pod="tigera-operator/tigera-operator-c7ccbd65-sbq7x" Jan 17 12:19:51.033074 kubelet[2673]: I0117 12:19:51.031522 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb1a5c3d-aea8-4d9a-846c-3d2b1d8a979a-var-lib-calico\") pod \"tigera-operator-c7ccbd65-sbq7x\" (UID: \"eb1a5c3d-aea8-4d9a-846c-3d2b1d8a979a\") " pod="tigera-operator/tigera-operator-c7ccbd65-sbq7x" Jan 17 12:19:51.162919 containerd[1463]: time="2025-01-17T12:19:51.162331074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfj2w,Uid:3053dc65-d193-49e7-8b44-ddf9f8448655,Namespace:kube-system,Attempt:0,}" Jan 17 12:19:51.201466 containerd[1463]: time="2025-01-17T12:19:51.201255572Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:51.201466 containerd[1463]: time="2025-01-17T12:19:51.201386650Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:51.201466 containerd[1463]: time="2025-01-17T12:19:51.201428449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:51.202813 containerd[1463]: time="2025-01-17T12:19:51.202594413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:51.239425 systemd[1]: Started cri-containerd-38f21c8279005fb2d559c09a0000f53d761da60a069284085a4e75f572fa5c7e.scope - libcontainer container 38f21c8279005fb2d559c09a0000f53d761da60a069284085a4e75f572fa5c7e. Jan 17 12:19:51.274567 containerd[1463]: time="2025-01-17T12:19:51.274490645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vfj2w,Uid:3053dc65-d193-49e7-8b44-ddf9f8448655,Namespace:kube-system,Attempt:0,} returns sandbox id \"38f21c8279005fb2d559c09a0000f53d761da60a069284085a4e75f572fa5c7e\"" Jan 17 12:19:51.279865 containerd[1463]: time="2025-01-17T12:19:51.279701948Z" level=info msg="CreateContainer within sandbox \"38f21c8279005fb2d559c09a0000f53d761da60a069284085a4e75f572fa5c7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:19:51.299909 containerd[1463]: time="2025-01-17T12:19:51.299847460Z" level=info msg="CreateContainer within sandbox \"38f21c8279005fb2d559c09a0000f53d761da60a069284085a4e75f572fa5c7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7681ebbadd2dc4caf1b4db6ec5de8b12d6644e4c89d7ba1a7eb005987255b3d5\"" Jan 17 12:19:51.302951 containerd[1463]: time="2025-01-17T12:19:51.301814019Z" level=info msg="StartContainer for \"7681ebbadd2dc4caf1b4db6ec5de8b12d6644e4c89d7ba1a7eb005987255b3d5\"" Jan 17 12:19:51.328654 containerd[1463]: time="2025-01-17T12:19:51.328429690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-sbq7x,Uid:eb1a5c3d-aea8-4d9a-846c-3d2b1d8a979a,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:19:51.347505 systemd[1]: Started cri-containerd-7681ebbadd2dc4caf1b4db6ec5de8b12d6644e4c89d7ba1a7eb005987255b3d5.scope - libcontainer container 7681ebbadd2dc4caf1b4db6ec5de8b12d6644e4c89d7ba1a7eb005987255b3d5. Jan 17 12:19:51.372177 containerd[1463]: time="2025-01-17T12:19:51.371685962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:19:51.372177 containerd[1463]: time="2025-01-17T12:19:51.371818482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:19:51.372177 containerd[1463]: time="2025-01-17T12:19:51.371848823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:51.372177 containerd[1463]: time="2025-01-17T12:19:51.372009155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:19:51.413637 systemd[1]: Started cri-containerd-726f84d985320331330a070ff72b8c901c972d194b2c3f8c336c6a5382a870fc.scope - libcontainer container 726f84d985320331330a070ff72b8c901c972d194b2c3f8c336c6a5382a870fc. 
Jan 17 12:19:51.428867 containerd[1463]: time="2025-01-17T12:19:51.428805237Z" level=info msg="StartContainer for \"7681ebbadd2dc4caf1b4db6ec5de8b12d6644e4c89d7ba1a7eb005987255b3d5\" returns successfully" Jan 17 12:19:51.504730 containerd[1463]: time="2025-01-17T12:19:51.504611168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-sbq7x,Uid:eb1a5c3d-aea8-4d9a-846c-3d2b1d8a979a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"726f84d985320331330a070ff72b8c901c972d194b2c3f8c336c6a5382a870fc\"" Jan 17 12:19:51.510397 containerd[1463]: time="2025-01-17T12:19:51.510348635Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:19:52.035697 kubelet[2673]: I0117 12:19:52.035648 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vfj2w" podStartSLOduration=2.035587959 podStartE2EDuration="2.035587959s" podCreationTimestamp="2025-01-17 12:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:19:52.035281514 +0000 UTC m=+17.274271699" watchObservedRunningTime="2025-01-17 12:19:52.035587959 +0000 UTC m=+17.274578144" Jan 17 12:19:56.126473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849276973.mount: Deactivated successfully. Jan 17 12:19:56.854583 containerd[1463]: time="2025-01-17T12:19:56.854516569Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:56.855998 containerd[1463]: time="2025-01-17T12:19:56.855928895Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764301" Jan 17 12:19:56.857602 containerd[1463]: time="2025-01-17T12:19:56.857496333Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:56.860861 containerd[1463]: time="2025-01-17T12:19:56.860776637Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:19:56.862656 containerd[1463]: time="2025-01-17T12:19:56.861829753Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.351423152s" Jan 17 12:19:56.862656 containerd[1463]: time="2025-01-17T12:19:56.861882317Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:19:56.864841 containerd[1463]: time="2025-01-17T12:19:56.864649307Z" level=info msg="CreateContainer within sandbox \"726f84d985320331330a070ff72b8c901c972d194b2c3f8c336c6a5382a870fc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:19:56.884003 containerd[1463]: time="2025-01-17T12:19:56.883937469Z" level=info msg="CreateContainer within sandbox \"726f84d985320331330a070ff72b8c901c972d194b2c3f8c336c6a5382a870fc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"7cd2f29609b70e71abf5296a0da5a8c91ebca2f4ec02e4acc360d2f5d0373321\"" Jan 17 12:19:56.885002 containerd[1463]: time="2025-01-17T12:19:56.884962540Z" level=info msg="StartContainer for \"7cd2f29609b70e71abf5296a0da5a8c91ebca2f4ec02e4acc360d2f5d0373321\"" Jan 17 12:19:56.929483 systemd[1]: Started cri-containerd-7cd2f29609b70e71abf5296a0da5a8c91ebca2f4ec02e4acc360d2f5d0373321.scope - libcontainer container 7cd2f29609b70e71abf5296a0da5a8c91ebca2f4ec02e4acc360d2f5d0373321. Jan 17 12:19:56.968807 containerd[1463]: time="2025-01-17T12:19:56.968746243Z" level=info msg="StartContainer for \"7cd2f29609b70e71abf5296a0da5a8c91ebca2f4ec02e4acc360d2f5d0373321\" returns successfully" Jan 17 12:19:57.705561 systemd[1]: Started sshd@14-10.128.0.67:22-115.91.91.182:53246.service - OpenSSH per-connection server daemon (115.91.91.182:53246). Jan 17 12:19:58.842193 sshd[3044]: Received disconnect from 115.91.91.182 port 53246:11: Bye Bye [preauth] Jan 17 12:19:58.842193 sshd[3044]: Disconnected from authenticating user root 115.91.91.182 port 53246 [preauth] Jan 17 12:19:58.845411 systemd[1]: sshd@14-10.128.0.67:22-115.91.91.182:53246.service: Deactivated successfully. Jan 17 12:20:00.350002 kubelet[2673]: I0117 12:20:00.349915 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-sbq7x" podStartSLOduration=4.994727057 podStartE2EDuration="10.349853557s" podCreationTimestamp="2025-01-17 12:19:50 +0000 UTC" firstStartedPulling="2025-01-17 12:19:51.507253114 +0000 UTC m=+16.746243284" lastFinishedPulling="2025-01-17 12:19:56.862379607 +0000 UTC m=+22.101369784" observedRunningTime="2025-01-17 12:19:57.0487311 +0000 UTC m=+22.287721284" watchObservedRunningTime="2025-01-17 12:20:00.349853557 +0000 UTC m=+25.588843742" Jan 17 12:20:00.351400 kubelet[2673]: I0117 12:20:00.350308 2673 topology_manager.go:215] "Topology Admit Handler" podUID="b1b61429-4da5-4860-8f56-17bccaf49ac9" podNamespace="calico-system" podName="calico-typha-684b9c598c-6qn6p" Jan 17 12:20:00.369341 systemd[1]: Created slice kubepods-besteffort-podb1b61429_4da5_4860_8f56_17bccaf49ac9.slice - libcontainer container kubepods-besteffort-podb1b61429_4da5_4860_8f56_17bccaf49ac9.slice. 
Jan 17 12:20:00.404870 kubelet[2673]: I0117 12:20:00.404817 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdh8\" (UniqueName: \"kubernetes.io/projected/b1b61429-4da5-4860-8f56-17bccaf49ac9-kube-api-access-rqdh8\") pod \"calico-typha-684b9c598c-6qn6p\" (UID: \"b1b61429-4da5-4860-8f56-17bccaf49ac9\") " pod="calico-system/calico-typha-684b9c598c-6qn6p" Jan 17 12:20:00.404870 kubelet[2673]: I0117 12:20:00.404890 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b61429-4da5-4860-8f56-17bccaf49ac9-tigera-ca-bundle\") pod \"calico-typha-684b9c598c-6qn6p\" (UID: \"b1b61429-4da5-4860-8f56-17bccaf49ac9\") " pod="calico-system/calico-typha-684b9c598c-6qn6p" Jan 17 12:20:00.405235 kubelet[2673]: I0117 12:20:00.404922 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b1b61429-4da5-4860-8f56-17bccaf49ac9-typha-certs\") pod \"calico-typha-684b9c598c-6qn6p\" (UID: \"b1b61429-4da5-4860-8f56-17bccaf49ac9\") " pod="calico-system/calico-typha-684b9c598c-6qn6p" Jan 17 12:20:00.603705 kubelet[2673]: I0117 12:20:00.602736 2673 topology_manager.go:215] "Topology Admit Handler" podUID="dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8" podNamespace="calico-system" podName="calico-node-hgmf4" Jan 17 12:20:00.617203 systemd[1]: Created slice kubepods-besteffort-poddc4c0e34_bcdb_4ad0_9c5d_d7fe076f1cd8.slice - libcontainer container kubepods-besteffort-poddc4c0e34_bcdb_4ad0_9c5d_d7fe076f1cd8.slice. Jan 17 12:20:00.676621 containerd[1463]: time="2025-01-17T12:20:00.676569310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-684b9c598c-6qn6p,Uid:b1b61429-4da5-4860-8f56-17bccaf49ac9,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:00.709425 kubelet[2673]: I0117 12:20:00.708597 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-lib-modules\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.709425 kubelet[2673]: I0117 12:20:00.708663 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-node-certs\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.709425 kubelet[2673]: I0117 12:20:00.708699 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-var-lib-calico\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.709425 kubelet[2673]: I0117 12:20:00.708738 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-var-run-calico\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.709425 kubelet[2673]: I0117 12:20:00.708778 2673 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxgz\" (UniqueName: \"kubernetes.io/projected/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-kube-api-access-6bxgz\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.710023 kubelet[2673]: I0117 12:20:00.708815 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-policysync\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.710023 kubelet[2673]: I0117 12:20:00.708850 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-cni-bin-dir\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.710023 kubelet[2673]: I0117 12:20:00.708896 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-flexvol-driver-host\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.710023 kubelet[2673]: I0117 12:20:00.708931 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-cni-net-dir\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.710023 kubelet[2673]: I0117 12:20:00.708969 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-xtables-lock\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.711603 kubelet[2673]: I0117 12:20:00.709005 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-tigera-ca-bundle\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.711603 kubelet[2673]: I0117 12:20:00.711126 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8-cni-log-dir\") pod \"calico-node-hgmf4\" (UID: \"dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8\") " pod="calico-system/calico-node-hgmf4" Jan 17 12:20:00.739447 containerd[1463]: time="2025-01-17T12:20:00.737837660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:00.739447 containerd[1463]: time="2025-01-17T12:20:00.739208694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:00.739447 containerd[1463]: time="2025-01-17T12:20:00.739299313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:00.741039 containerd[1463]: time="2025-01-17T12:20:00.739640396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:00.775007 kubelet[2673]: I0117 12:20:00.774102 2673 topology_manager.go:215] "Topology Admit Handler" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" podNamespace="calico-system" podName="csi-node-driver-4jsqk" Jan 17 12:20:00.777782 kubelet[2673]: E0117 12:20:00.777749 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:00.803375 systemd[1]: Started cri-containerd-c71a6295b67896100c96cc17f117ee5f4e9d23de319fe10f8b04c03e5f61ed9d.scope - libcontainer container c71a6295b67896100c96cc17f117ee5f4e9d23de319fe10f8b04c03e5f61ed9d. Jan 17 12:20:00.813117 kubelet[2673]: I0117 12:20:00.812987 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc2rz\" (UniqueName: \"kubernetes.io/projected/c39613df-5b01-4ed2-aed8-82b1b3948bbf-kube-api-access-qc2rz\") pod \"csi-node-driver-4jsqk\" (UID: \"c39613df-5b01-4ed2-aed8-82b1b3948bbf\") " pod="calico-system/csi-node-driver-4jsqk" Jan 17 12:20:00.815723 kubelet[2673]: I0117 12:20:00.815246 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c39613df-5b01-4ed2-aed8-82b1b3948bbf-varrun\") pod \"csi-node-driver-4jsqk\" (UID: \"c39613df-5b01-4ed2-aed8-82b1b3948bbf\") " pod="calico-system/csi-node-driver-4jsqk" Jan 17 12:20:00.815723 kubelet[2673]: I0117 12:20:00.815558 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c39613df-5b01-4ed2-aed8-82b1b3948bbf-socket-dir\") pod \"csi-node-driver-4jsqk\" (UID: \"c39613df-5b01-4ed2-aed8-82b1b3948bbf\") " pod="calico-system/csi-node-driver-4jsqk" Jan 17 12:20:00.819709 kubelet[2673]: I0117 12:20:00.816773 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c39613df-5b01-4ed2-aed8-82b1b3948bbf-registration-dir\") pod \"csi-node-driver-4jsqk\" (UID: \"c39613df-5b01-4ed2-aed8-82b1b3948bbf\") " pod="calico-system/csi-node-driver-4jsqk" Jan 17 12:20:00.819709 kubelet[2673]: I0117 12:20:00.816912 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c39613df-5b01-4ed2-aed8-82b1b3948bbf-kubelet-dir\") pod \"csi-node-driver-4jsqk\" (UID: \"c39613df-5b01-4ed2-aed8-82b1b3948bbf\") " pod="calico-system/csi-node-driver-4jsqk" Jan 17 12:20:00.822402 kubelet[2673]: E0117 12:20:00.822341 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.823234 kubelet[2673]: W0117 12:20:00.823200 2673 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.823543 kubelet[2673]: E0117 12:20:00.823518 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.826502 kubelet[2673]: E0117 12:20:00.826467 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.826924 kubelet[2673]: W0117 12:20:00.826874 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.828480 kubelet[2673]: E0117 12:20:00.828450 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.829611 kubelet[2673]: E0117 12:20:00.829519 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.829611 kubelet[2673]: W0117 12:20:00.829543 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.830175 kubelet[2673]: E0117 12:20:00.829916 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.831852 kubelet[2673]: E0117 12:20:00.831685 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.831852 kubelet[2673]: W0117 12:20:00.831709 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.832434 kubelet[2673]: E0117 12:20:00.832259 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.834196 kubelet[2673]: E0117 12:20:00.832536 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.834594 kubelet[2673]: W0117 12:20:00.834330 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.834594 kubelet[2673]: E0117 12:20:00.834412 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.835220 kubelet[2673]: E0117 12:20:00.835028 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.835220 kubelet[2673]: W0117 12:20:00.835045 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.835220 kubelet[2673]: E0117 12:20:00.835117 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.835908 kubelet[2673]: E0117 12:20:00.835725 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.835908 kubelet[2673]: W0117 12:20:00.835759 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.835908 kubelet[2673]: E0117 12:20:00.835887 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.836928 kubelet[2673]: E0117 12:20:00.836708 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.836928 kubelet[2673]: W0117 12:20:00.836730 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.836928 kubelet[2673]: E0117 12:20:00.836891 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.838306 kubelet[2673]: E0117 12:20:00.837750 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.838306 kubelet[2673]: W0117 12:20:00.837772 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.838712 kubelet[2673]: E0117 12:20:00.838483 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.839767 kubelet[2673]: E0117 12:20:00.839656 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.839767 kubelet[2673]: W0117 12:20:00.839678 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.840672 kubelet[2673]: E0117 12:20:00.840211 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.842368 kubelet[2673]: E0117 12:20:00.842251 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.842368 kubelet[2673]: W0117 12:20:00.842293 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.843473 kubelet[2673]: E0117 12:20:00.843230 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.843473 kubelet[2673]: W0117 12:20:00.843249 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.843473 kubelet[2673]: E0117 12:20:00.843272 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.843473 kubelet[2673]: E0117 12:20:00.843298 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.844674 kubelet[2673]: E0117 12:20:00.844512 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.844674 kubelet[2673]: W0117 12:20:00.844532 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.846027 kubelet[2673]: E0117 12:20:00.845848 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.846027 kubelet[2673]: W0117 12:20:00.845868 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.846027 kubelet[2673]: E0117 12:20:00.845894 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.847690 kubelet[2673]: E0117 12:20:00.847458 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.848094 kubelet[2673]: E0117 12:20:00.847881 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.848385 kubelet[2673]: W0117 12:20:00.848181 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.848986 kubelet[2673]: E0117 12:20:00.848618 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.849858 kubelet[2673]: E0117 12:20:00.849678 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.849858 kubelet[2673]: W0117 12:20:00.849700 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.850726 kubelet[2673]: E0117 12:20:00.850347 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.850726 kubelet[2673]: E0117 12:20:00.850492 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.850726 kubelet[2673]: W0117 12:20:00.850507 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.851646 kubelet[2673]: E0117 12:20:00.851034 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.852418 kubelet[2673]: E0117 12:20:00.852270 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.852418 kubelet[2673]: W0117 12:20:00.852291 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.852841 kubelet[2673]: E0117 12:20:00.852637 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.856174 kubelet[2673]: E0117 12:20:00.854304 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.856569 kubelet[2673]: W0117 12:20:00.856229 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.858095 kubelet[2673]: E0117 12:20:00.857959 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.859683 kubelet[2673]: E0117 12:20:00.858990 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.859683 kubelet[2673]: W0117 12:20:00.859011 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.862181 kubelet[2673]: E0117 12:20:00.861854 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.862660 kubelet[2673]: E0117 12:20:00.862638 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.863760 kubelet[2673]: W0117 12:20:00.863696 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.865024 kubelet[2673]: E0117 12:20:00.864845 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.868720 kubelet[2673]: E0117 12:20:00.866299 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.868720 kubelet[2673]: W0117 12:20:00.866322 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.869354 kubelet[2673]: E0117 12:20:00.869328 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.872517 kubelet[2673]: E0117 12:20:00.872452 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.872517 kubelet[2673]: W0117 12:20:00.872514 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.873156 kubelet[2673]: E0117 12:20:00.872838 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.873283 kubelet[2673]: E0117 12:20:00.873264 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.873283 kubelet[2673]: W0117 12:20:00.873280 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.873880 kubelet[2673]: E0117 12:20:00.873568 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.874534 kubelet[2673]: E0117 12:20:00.874486 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.874534 kubelet[2673]: W0117 12:20:00.874529 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.874888 kubelet[2673]: E0117 12:20:00.874730 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.876491 kubelet[2673]: E0117 12:20:00.876450 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.876491 kubelet[2673]: W0117 12:20:00.876472 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.876667 kubelet[2673]: E0117 12:20:00.876578 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.877361 kubelet[2673]: E0117 12:20:00.877327 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.877361 kubelet[2673]: W0117 12:20:00.877351 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.879333 kubelet[2673]: E0117 12:20:00.879306 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.879333 kubelet[2673]: W0117 12:20:00.879332 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.879523 kubelet[2673]: E0117 12:20:00.879358 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.879523 kubelet[2673]: E0117 12:20:00.879409 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.887483 kubelet[2673]: E0117 12:20:00.887448 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.887483 kubelet[2673]: W0117 12:20:00.887479 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.887707 kubelet[2673]: E0117 12:20:00.887512 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.920162 kubelet[2673]: E0117 12:20:00.920094 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.920370 kubelet[2673]: W0117 12:20:00.920148 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.920370 kubelet[2673]: E0117 12:20:00.920212 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.921197 kubelet[2673]: E0117 12:20:00.921165 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.921197 kubelet[2673]: W0117 12:20:00.921192 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.921406 kubelet[2673]: E0117 12:20:00.921350 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.922055 kubelet[2673]: E0117 12:20:00.922012 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.922055 kubelet[2673]: W0117 12:20:00.922037 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.923311 kubelet[2673]: E0117 12:20:00.922071 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.924252 kubelet[2673]: E0117 12:20:00.924214 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.924252 kubelet[2673]: W0117 12:20:00.924240 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.924415 kubelet[2673]: E0117 12:20:00.924273 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.925083 containerd[1463]: time="2025-01-17T12:20:00.924922401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hgmf4,Uid:dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:00.926344 kubelet[2673]: E0117 12:20:00.926316 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.926344 kubelet[2673]: W0117 12:20:00.926344 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.927125 kubelet[2673]: E0117 12:20:00.926875 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.927125 kubelet[2673]: E0117 12:20:00.927056 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.927414 kubelet[2673]: W0117 12:20:00.927266 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.928929 kubelet[2673]: E0117 12:20:00.928220 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.928929 kubelet[2673]: W0117 12:20:00.928598 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.929345 kubelet[2673]: E0117 12:20:00.929289 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.930027 kubelet[2673]: W0117 12:20:00.929305 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.931319 kubelet[2673]: E0117 12:20:00.930072 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.931319 kubelet[2673]: E0117 12:20:00.930176 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.931319 kubelet[2673]: E0117 12:20:00.930199 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.931319 kubelet[2673]: E0117 12:20:00.930486 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.931319 kubelet[2673]: W0117 12:20:00.930708 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.931319 kubelet[2673]: E0117 12:20:00.930992 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.932325 kubelet[2673]: E0117 12:20:00.932215 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.932325 kubelet[2673]: W0117 12:20:00.932247 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.933324 kubelet[2673]: E0117 12:20:00.932420 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.933324 kubelet[2673]: E0117 12:20:00.932718 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.933324 kubelet[2673]: W0117 12:20:00.932734 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.933324 kubelet[2673]: E0117 12:20:00.933000 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.938223 kubelet[2673]: E0117 12:20:00.937323 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.938223 kubelet[2673]: W0117 12:20:00.937352 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.938223 kubelet[2673]: E0117 12:20:00.938006 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.938223 kubelet[2673]: W0117 12:20:00.938024 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.939302 kubelet[2673]: E0117 12:20:00.939210 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.939302 kubelet[2673]: E0117 12:20:00.939297 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.941345 kubelet[2673]: E0117 12:20:00.940112 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.941345 kubelet[2673]: W0117 12:20:00.940168 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.941958 kubelet[2673]: E0117 12:20:00.941665 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.941958 kubelet[2673]: W0117 12:20:00.941682 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.942688 kubelet[2673]: E0117 12:20:00.942613 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.942688 kubelet[2673]: E0117 12:20:00.942668 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.944495 kubelet[2673]: E0117 12:20:00.944397 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.944495 kubelet[2673]: W0117 12:20:00.944427 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.944913 kubelet[2673]: E0117 12:20:00.944887 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.944913 kubelet[2673]: W0117 12:20:00.944910 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.947521 kubelet[2673]: E0117 12:20:00.947478 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.947521 kubelet[2673]: W0117 12:20:00.947510 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.949847 kubelet[2673]: E0117 12:20:00.948400 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.949847 kubelet[2673]: W0117 12:20:00.948423 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.950406 kubelet[2673]: E0117 12:20:00.950376 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.950511 kubelet[2673]: E0117 12:20:00.950437 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.950608 kubelet[2673]: E0117 12:20:00.950581 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.950701 kubelet[2673]: E0117 12:20:00.950631 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.953275 kubelet[2673]: E0117 12:20:00.953246 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.953275 kubelet[2673]: W0117 12:20:00.953273 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.953495 kubelet[2673]: E0117 12:20:00.953302 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:00.954370 kubelet[2673]: E0117 12:20:00.954344 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.954370 kubelet[2673]: W0117 12:20:00.954369 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.955943 kubelet[2673]: E0117 12:20:00.955915 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.958667 kubelet[2673]: E0117 12:20:00.958629 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.958667 kubelet[2673]: W0117 12:20:00.958665 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.960314 kubelet[2673]: E0117 12:20:00.960282 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.960712 kubelet[2673]: E0117 12:20:00.960581 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.960712 kubelet[2673]: W0117 12:20:00.960604 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.964294 kubelet[2673]: E0117 12:20:00.964256 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.965191 kubelet[2673]: E0117 12:20:00.964789 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.965191 kubelet[2673]: W0117 12:20:00.964812 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.965191 kubelet[2673]: E0117 12:20:00.964840 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:00.969393 kubelet[2673]: E0117 12:20:00.969347 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:00.969393 kubelet[2673]: W0117 12:20:00.969385 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:00.969669 kubelet[2673]: E0117 12:20:00.969421 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:01.009651 containerd[1463]: time="2025-01-17T12:20:01.008906965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:01.009651 containerd[1463]: time="2025-01-17T12:20:01.009023183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:01.009651 containerd[1463]: time="2025-01-17T12:20:01.009051502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:01.009651 containerd[1463]: time="2025-01-17T12:20:01.009223539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:01.015123 kubelet[2673]: E0117 12:20:01.014337 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:01.015123 kubelet[2673]: W0117 12:20:01.014367 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:01.015123 kubelet[2673]: E0117 12:20:01.014398 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:01.061829 containerd[1463]: time="2025-01-17T12:20:01.061559537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-684b9c598c-6qn6p,Uid:b1b61429-4da5-4860-8f56-17bccaf49ac9,Namespace:calico-system,Attempt:0,} returns sandbox id \"c71a6295b67896100c96cc17f117ee5f4e9d23de319fe10f8b04c03e5f61ed9d\"" Jan 17 12:20:01.067598 containerd[1463]: time="2025-01-17T12:20:01.066529240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:20:01.067816 systemd[1]: Started cri-containerd-b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819.scope - libcontainer container b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819. Jan 17 12:20:01.120758 containerd[1463]: time="2025-01-17T12:20:01.117899537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hgmf4,Uid:dc4c0e34-bcdb-4ad0-9c5d-d7fe076f1cd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819\"" Jan 17 12:20:02.494729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount181574071.mount: Deactivated successfully. 
Jan 17 12:20:02.935584 kubelet[2673]: E0117 12:20:02.935476 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:03.384446 containerd[1463]: time="2025-01-17T12:20:03.384365342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:03.386470 containerd[1463]: time="2025-01-17T12:20:03.386373045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:20:03.389270 containerd[1463]: time="2025-01-17T12:20:03.389168789Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:03.395116 containerd[1463]: time="2025-01-17T12:20:03.395052509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:03.396461 containerd[1463]: time="2025-01-17T12:20:03.396091577Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.329511791s" Jan 17 12:20:03.396461 containerd[1463]: time="2025-01-17T12:20:03.396161846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:20:03.399453 containerd[1463]: time="2025-01-17T12:20:03.397880950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:20:03.419790 containerd[1463]: time="2025-01-17T12:20:03.418336891Z" level=info msg="CreateContainer within sandbox \"c71a6295b67896100c96cc17f117ee5f4e9d23de319fe10f8b04c03e5f61ed9d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:20:03.451185 containerd[1463]: time="2025-01-17T12:20:03.451093044Z" level=info msg="CreateContainer within sandbox \"c71a6295b67896100c96cc17f117ee5f4e9d23de319fe10f8b04c03e5f61ed9d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"486cec36d6dc62d3a2156ba0268b350baec988a65b3120a119f41c65764a9d36\"" Jan 17 12:20:03.452526 containerd[1463]: time="2025-01-17T12:20:03.452002030Z" level=info msg="StartContainer for \"486cec36d6dc62d3a2156ba0268b350baec988a65b3120a119f41c65764a9d36\"" Jan 17 12:20:03.497523 systemd[1]: Started cri-containerd-486cec36d6dc62d3a2156ba0268b350baec988a65b3120a119f41c65764a9d36.scope - libcontainer container 486cec36d6dc62d3a2156ba0268b350baec988a65b3120a119f41c65764a9d36. 
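The containerd and systemd entries around this point trace the usual CRI sequence for the calico-typha pod: RunPodSandbox returns a sandbox id, the typha image is pulled, a container is created inside that sandbox, and it is started (systemd then reports the matching cri-containerd-<id>.scope unit). As a rough illustration only, and not kubelet's actual code path, the same sequence issued directly against containerd's CRI socket could look like the sketch below; the socket path is the common containerd default and the metadata values are copied from the log entries, everything else is assumed for brevity.

```go
// Illustrative sketch of the CRI call order seen in the surrounding entries:
// RunPodSandbox -> PullImage -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI endpoint.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// Pod metadata taken from the "RunPodSandbox for &PodSandboxMetadata{...}" entry.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-typha-684b9c598c-6qn6p",
			Uid:       "b1b61429-4da5-4860-8f56-17bccaf49ac9",
			Namespace: "calico-system",
			Attempt:   0,
		},
	}

	// "RunPodSandbox ... returns sandbox id ..."
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// "PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
	pulled, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"},
	})
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ... for container &ContainerMetadata{Name:calico-typha,...}"
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: pulled.ImageRef},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// "StartContainer ..." — systemd then shows the cri-containerd-<id>.scope unit starting.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", created.ContainerId, "in sandbox", sb.PodSandboxId)
}
```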
Jan 17 12:20:03.564108 containerd[1463]: time="2025-01-17T12:20:03.564039527Z" level=info msg="StartContainer for \"486cec36d6dc62d3a2156ba0268b350baec988a65b3120a119f41c65764a9d36\" returns successfully" Jan 17 12:20:04.118792 kubelet[2673]: E0117 12:20:04.118507 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.118792 kubelet[2673]: W0117 12:20:04.118556 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.118792 kubelet[2673]: E0117 12:20:04.118584 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.119577 kubelet[2673]: E0117 12:20:04.119008 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.119577 kubelet[2673]: W0117 12:20:04.119044 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.119577 kubelet[2673]: E0117 12:20:04.119068 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.119577 kubelet[2673]: E0117 12:20:04.119526 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.119577 kubelet[2673]: W0117 12:20:04.119541 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.119856 kubelet[2673]: E0117 12:20:04.119585 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.120390 kubelet[2673]: E0117 12:20:04.119933 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.120390 kubelet[2673]: W0117 12:20:04.119949 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.120390 kubelet[2673]: E0117 12:20:04.119968 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:04.120638 kubelet[2673]: E0117 12:20:04.120431 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.120638 kubelet[2673]: W0117 12:20:04.120484 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.120638 kubelet[2673]: E0117 12:20:04.120507 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.120885 kubelet[2673]: E0117 12:20:04.120862 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.120956 kubelet[2673]: W0117 12:20:04.120900 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.120956 kubelet[2673]: E0117 12:20:04.120922 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.121380 kubelet[2673]: E0117 12:20:04.121321 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.121380 kubelet[2673]: W0117 12:20:04.121341 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.121380 kubelet[2673]: E0117 12:20:04.121362 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.121703 kubelet[2673]: E0117 12:20:04.121680 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.121703 kubelet[2673]: W0117 12:20:04.121698 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.121912 kubelet[2673]: E0117 12:20:04.121718 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.122062 kubelet[2673]: E0117 12:20:04.122041 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.122062 kubelet[2673]: W0117 12:20:04.122062 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.122329 kubelet[2673]: E0117 12:20:04.122082 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:04.122437 kubelet[2673]: E0117 12:20:04.122399 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.122437 kubelet[2673]: W0117 12:20:04.122417 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.122437 kubelet[2673]: E0117 12:20:04.122437 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.122746 kubelet[2673]: E0117 12:20:04.122727 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.122746 kubelet[2673]: W0117 12:20:04.122744 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.123049 kubelet[2673]: E0117 12:20:04.122764 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.123237 kubelet[2673]: E0117 12:20:04.123058 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.123237 kubelet[2673]: W0117 12:20:04.123071 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.123237 kubelet[2673]: E0117 12:20:04.123092 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.123732 kubelet[2673]: E0117 12:20:04.123417 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.123732 kubelet[2673]: W0117 12:20:04.123430 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.123732 kubelet[2673]: E0117 12:20:04.123449 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.124301 kubelet[2673]: E0117 12:20:04.124161 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.124301 kubelet[2673]: W0117 12:20:04.124179 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.124301 kubelet[2673]: E0117 12:20:04.124206 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:04.124860 kubelet[2673]: E0117 12:20:04.124811 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.124860 kubelet[2673]: W0117 12:20:04.124848 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.124994 kubelet[2673]: E0117 12:20:04.124873 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.156619 kubelet[2673]: E0117 12:20:04.156171 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.156619 kubelet[2673]: W0117 12:20:04.156315 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.156619 kubelet[2673]: E0117 12:20:04.156355 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.157478 kubelet[2673]: E0117 12:20:04.157450 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.157478 kubelet[2673]: W0117 12:20:04.157473 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.157657 kubelet[2673]: E0117 12:20:04.157509 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.157955 kubelet[2673]: E0117 12:20:04.157932 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.157955 kubelet[2673]: W0117 12:20:04.157951 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.158093 kubelet[2673]: E0117 12:20:04.157981 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.158425 kubelet[2673]: E0117 12:20:04.158400 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.158425 kubelet[2673]: W0117 12:20:04.158421 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.158591 kubelet[2673]: E0117 12:20:04.158451 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:04.158815 kubelet[2673]: E0117 12:20:04.158793 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.158815 kubelet[2673]: W0117 12:20:04.158812 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.159010 kubelet[2673]: E0117 12:20:04.158903 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.159155 kubelet[2673]: E0117 12:20:04.159118 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.159299 kubelet[2673]: W0117 12:20:04.159166 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.159299 kubelet[2673]: E0117 12:20:04.159259 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.159541 kubelet[2673]: E0117 12:20:04.159509 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.159541 kubelet[2673]: W0117 12:20:04.159527 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.159755 kubelet[2673]: E0117 12:20:04.159623 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.159851 kubelet[2673]: E0117 12:20:04.159825 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.159851 kubelet[2673]: W0117 12:20:04.159837 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.159956 kubelet[2673]: E0117 12:20:04.159864 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.160325 kubelet[2673]: E0117 12:20:04.160263 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.160325 kubelet[2673]: W0117 12:20:04.160281 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.160325 kubelet[2673]: E0117 12:20:04.160310 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:04.160916 kubelet[2673]: E0117 12:20:04.160864 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.160916 kubelet[2673]: W0117 12:20:04.160884 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.161072 kubelet[2673]: E0117 12:20:04.161054 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.161550 kubelet[2673]: E0117 12:20:04.161396 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.161550 kubelet[2673]: W0117 12:20:04.161414 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.161550 kubelet[2673]: E0117 12:20:04.161452 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.161824 kubelet[2673]: E0117 12:20:04.161719 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.161824 kubelet[2673]: W0117 12:20:04.161733 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.161824 kubelet[2673]: E0117 12:20:04.161764 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.162079 kubelet[2673]: E0117 12:20:04.162060 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.162079 kubelet[2673]: W0117 12:20:04.162077 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.162245 kubelet[2673]: E0117 12:20:04.162105 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.162498 kubelet[2673]: E0117 12:20:04.162477 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.162498 kubelet[2673]: W0117 12:20:04.162495 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.162633 kubelet[2673]: E0117 12:20:04.162537 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:04.163335 kubelet[2673]: E0117 12:20:04.163312 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.163335 kubelet[2673]: W0117 12:20:04.163331 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.163481 kubelet[2673]: E0117 12:20:04.163370 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.163759 kubelet[2673]: E0117 12:20:04.163730 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.163759 kubelet[2673]: W0117 12:20:04.163749 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.163896 kubelet[2673]: E0117 12:20:04.163776 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.164384 kubelet[2673]: E0117 12:20:04.164362 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.164384 kubelet[2673]: W0117 12:20:04.164381 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.164583 kubelet[2673]: E0117 12:20:04.164408 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:20:04.164734 kubelet[2673]: E0117 12:20:04.164715 2673 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:20:04.164734 kubelet[2673]: W0117 12:20:04.164732 2673 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:20:04.164856 kubelet[2673]: E0117 12:20:04.164752 2673 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:20:04.821622 containerd[1463]: time="2025-01-17T12:20:04.821546794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:04.822956 containerd[1463]: time="2025-01-17T12:20:04.822894252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:20:04.825100 containerd[1463]: time="2025-01-17T12:20:04.825030638Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:04.830206 containerd[1463]: time="2025-01-17T12:20:04.830097719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:04.831268 containerd[1463]: time="2025-01-17T12:20:04.830946837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.433020044s" Jan 17 12:20:04.831268 containerd[1463]: time="2025-01-17T12:20:04.831000350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:20:04.834502 containerd[1463]: time="2025-01-17T12:20:04.834459029Z" level=info msg="CreateContainer within sandbox \"b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:20:04.861726 containerd[1463]: time="2025-01-17T12:20:04.861649630Z" level=info msg="CreateContainer within sandbox \"b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f\"" Jan 17 12:20:04.863342 containerd[1463]: time="2025-01-17T12:20:04.862470021Z" level=info msg="StartContainer for \"a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f\"" Jan 17 12:20:04.917460 systemd[1]: Started cri-containerd-a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f.scope - libcontainer container a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f. Jan 17 12:20:04.938171 kubelet[2673]: E0117 12:20:04.936051 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:04.976393 containerd[1463]: time="2025-01-17T12:20:04.976324795Z" level=info msg="StartContainer for \"a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f\" returns successfully" Jan 17 12:20:04.999112 systemd[1]: cri-containerd-a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f.scope: Deactivated successfully. 
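The repeated driver-call.go/plugins.go errors above come from kubelet probing the FlexVolume plugin directory: the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet ("executable file not found in $PATH"), so the init call produces empty output, and unmarshalling an empty string as JSON fails with "unexpected end of JSON input". A FlexVolume driver is expected to print a JSON status object to stdout for every call; the flexvol-driver container started in this entry (Calico's pod2daemon-flexvol image) is what typically installs that uds binary, after which the probe errors stop. A minimal illustrative driver responding to init might look like the following sketch (not the actual Calico driver):

```go
// Minimal sketch of a FlexVolume-style driver answering the "init" call.
// Illustrative only: the contract kubelet checks is simply a JSON object
// with at least a "status" field printed to stdout.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// An empty reply here is exactly what produces the
		// "unexpected end of JSON input" errors seen above.
		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	out.Encode(driverStatus{Status: "Not supported"})
}
```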
Jan 17 12:20:05.043238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f-rootfs.mount: Deactivated successfully. Jan 17 12:20:05.068657 kubelet[2673]: I0117 12:20:05.068586 2673 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:20:05.093437 kubelet[2673]: I0117 12:20:05.093269 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-684b9c598c-6qn6p" podStartSLOduration=2.761523257 podStartE2EDuration="5.093013895s" podCreationTimestamp="2025-01-17 12:20:00 +0000 UTC" firstStartedPulling="2025-01-17 12:20:01.06565467 +0000 UTC m=+26.304644844" lastFinishedPulling="2025-01-17 12:20:03.397145305 +0000 UTC m=+28.636135482" observedRunningTime="2025-01-17 12:20:04.081463906 +0000 UTC m=+29.320454097" watchObservedRunningTime="2025-01-17 12:20:05.093013895 +0000 UTC m=+30.332004080" Jan 17 12:20:05.620847 containerd[1463]: time="2025-01-17T12:20:05.620754184Z" level=info msg="shim disconnected" id=a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f namespace=k8s.io Jan 17 12:20:05.620847 containerd[1463]: time="2025-01-17T12:20:05.620843467Z" level=warning msg="cleaning up after shim disconnected" id=a595e12e94919b6e66332f1bb56f835356a48d773f270ed1ab2b9521c29af31f namespace=k8s.io Jan 17 12:20:05.620847 containerd[1463]: time="2025-01-17T12:20:05.620857797Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:06.075936 containerd[1463]: time="2025-01-17T12:20:06.075722845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:20:06.936787 kubelet[2673]: E0117 12:20:06.936011 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:08.936716 kubelet[2673]: E0117 12:20:08.936614 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:10.718475 containerd[1463]: time="2025-01-17T12:20:10.718313442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:10.720028 containerd[1463]: time="2025-01-17T12:20:10.719951923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:20:10.721557 containerd[1463]: time="2025-01-17T12:20:10.721457747Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:10.725607 containerd[1463]: time="2025-01-17T12:20:10.725556543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:10.727223 containerd[1463]: time="2025-01-17T12:20:10.726818471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.651021589s" Jan 17 12:20:10.727223 containerd[1463]: time="2025-01-17T12:20:10.726884938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:20:10.730580 containerd[1463]: time="2025-01-17T12:20:10.730475655Z" level=info msg="CreateContainer within sandbox \"b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:20:10.755112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964441415.mount: Deactivated successfully. Jan 17 12:20:10.757598 containerd[1463]: time="2025-01-17T12:20:10.757538498Z" level=info msg="CreateContainer within sandbox \"b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955\"" Jan 17 12:20:10.759849 containerd[1463]: time="2025-01-17T12:20:10.758279550Z" level=info msg="StartContainer for \"d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955\"" Jan 17 12:20:10.813436 systemd[1]: Started cri-containerd-d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955.scope - libcontainer container d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955. Jan 17 12:20:10.859043 containerd[1463]: time="2025-01-17T12:20:10.858980759Z" level=info msg="StartContainer for \"d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955\" returns successfully" Jan 17 12:20:10.870286 systemd[1]: Started sshd@15-10.128.0.67:22-85.190.243.197:34922.service - OpenSSH per-connection server daemon (85.190.243.197:34922). Jan 17 12:20:10.937205 kubelet[2673]: E0117 12:20:10.936393 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:11.733684 containerd[1463]: time="2025-01-17T12:20:11.733619775Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:20:11.736951 systemd[1]: cri-containerd-d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955.scope: Deactivated successfully. Jan 17 12:20:11.771149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955-rootfs.mount: Deactivated successfully. 
Jan 17 12:20:11.788217 kubelet[2673]: I0117 12:20:11.788176 2673 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:20:11.821696 kubelet[2673]: I0117 12:20:11.821624 2673 topology_manager.go:215] "Topology Admit Handler" podUID="7822461b-3dc7-4498-bfeb-6f9db1652d5b" podNamespace="kube-system" podName="coredns-76f75df574-l2mgq" Jan 17 12:20:11.835320 kubelet[2673]: I0117 12:20:11.833957 2673 topology_manager.go:215] "Topology Admit Handler" podUID="a76c710f-94e5-4498-855a-6ad309450588" podNamespace="kube-system" podName="coredns-76f75df574-82zdp" Jan 17 12:20:11.835320 kubelet[2673]: I0117 12:20:11.834742 2673 topology_manager.go:215] "Topology Admit Handler" podUID="e6399d16-d230-4697-8af3-6eb4630e54b6" podNamespace="calico-apiserver" podName="calico-apiserver-d4bd59598-xhsbk" Jan 17 12:20:11.841433 kubelet[2673]: W0117 12:20:11.841288 2673 reflector.go:539] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' and this object Jan 17 12:20:11.841433 kubelet[2673]: E0117 12:20:11.841344 2673 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' and this object Jan 17 12:20:11.844608 kubelet[2673]: W0117 12:20:11.842238 2673 reflector.go:539] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' and this object Jan 17 12:20:11.844608 kubelet[2673]: E0117 12:20:11.842279 2673 reflector.go:147] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' and this object Jan 17 12:20:11.844608 kubelet[2673]: I0117 12:20:11.842335 2673 topology_manager.go:215] "Topology Admit Handler" podUID="2cf0d785-6509-49da-ba0a-ba16afb63819" podNamespace="calico-system" podName="calico-kube-controllers-5d855f4c89-2zqf4" Jan 17 12:20:11.845672 kubelet[2673]: I0117 12:20:11.845379 2673 topology_manager.go:215] "Topology Admit Handler" podUID="20b90efd-4f58-4814-b601-0f40e2f4b17f" podNamespace="calico-apiserver" podName="calico-apiserver-d4bd59598-xbgwr" Jan 17 12:20:11.852107 systemd[1]: Created slice kubepods-burstable-pod7822461b_3dc7_4498_bfeb_6f9db1652d5b.slice - libcontainer container kubepods-burstable-pod7822461b_3dc7_4498_bfeb_6f9db1652d5b.slice. 
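Annotation: the "Created slice kubepods-burstable-pod…" units here (and the besteffort ones just below) follow a mechanical naming scheme that can be read straight off these entries: the pod UID with dashes turned into underscores, nested under the QoS-class slice. A small sketch, inferred from these log lines only and limited to the two QoS classes that appear here; it is not taken from the kubelet's cgroup-driver source.

```python
# Slice names as they appear in these entries (inferred from the log).
def pod_slice(pod_uid: str, qos: str) -> str:
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"


assert pod_slice("7822461b-3dc7-4498-bfeb-6f9db1652d5b", "burstable") == \
    "kubepods-burstable-pod7822461b_3dc7_4498_bfeb_6f9db1652d5b.slice"
assert pod_slice("2cf0d785-6509-49da-ba0a-ba16afb63819", "besteffort") == \
    "kubepods-besteffort-pod2cf0d785_6509_49da_ba0a_ba16afb63819.slice"
```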
Jan 17 12:20:11.864834 systemd[1]: Created slice kubepods-burstable-poda76c710f_94e5_4498_855a_6ad309450588.slice - libcontainer container kubepods-burstable-poda76c710f_94e5_4498_855a_6ad309450588.slice. Jan 17 12:20:11.879112 systemd[1]: Created slice kubepods-besteffort-pode6399d16_d230_4697_8af3_6eb4630e54b6.slice - libcontainer container kubepods-besteffort-pode6399d16_d230_4697_8af3_6eb4630e54b6.slice. Jan 17 12:20:11.889597 systemd[1]: Created slice kubepods-besteffort-pod2cf0d785_6509_49da_ba0a_ba16afb63819.slice - libcontainer container kubepods-besteffort-pod2cf0d785_6509_49da_ba0a_ba16afb63819.slice. Jan 17 12:20:11.905761 systemd[1]: Created slice kubepods-besteffort-pod20b90efd_4f58_4814_b601_0f40e2f4b17f.slice - libcontainer container kubepods-besteffort-pod20b90efd_4f58_4814_b601_0f40e2f4b17f.slice. Jan 17 12:20:11.921161 kubelet[2673]: I0117 12:20:11.920180 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2cf0d785-6509-49da-ba0a-ba16afb63819-tigera-ca-bundle\") pod \"calico-kube-controllers-5d855f4c89-2zqf4\" (UID: \"2cf0d785-6509-49da-ba0a-ba16afb63819\") " pod="calico-system/calico-kube-controllers-5d855f4c89-2zqf4" Jan 17 12:20:11.921161 kubelet[2673]: I0117 12:20:11.920246 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbj5\" (UniqueName: \"kubernetes.io/projected/e6399d16-d230-4697-8af3-6eb4630e54b6-kube-api-access-7kbj5\") pod \"calico-apiserver-d4bd59598-xhsbk\" (UID: \"e6399d16-d230-4697-8af3-6eb4630e54b6\") " pod="calico-apiserver/calico-apiserver-d4bd59598-xhsbk" Jan 17 12:20:11.921161 kubelet[2673]: I0117 12:20:11.920428 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20b90efd-4f58-4814-b601-0f40e2f4b17f-calico-apiserver-certs\") pod \"calico-apiserver-d4bd59598-xbgwr\" (UID: \"20b90efd-4f58-4814-b601-0f40e2f4b17f\") " pod="calico-apiserver/calico-apiserver-d4bd59598-xbgwr" Jan 17 12:20:11.921161 kubelet[2673]: I0117 12:20:11.920602 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjqw9\" (UniqueName: \"kubernetes.io/projected/2cf0d785-6509-49da-ba0a-ba16afb63819-kube-api-access-kjqw9\") pod \"calico-kube-controllers-5d855f4c89-2zqf4\" (UID: \"2cf0d785-6509-49da-ba0a-ba16afb63819\") " pod="calico-system/calico-kube-controllers-5d855f4c89-2zqf4" Jan 17 12:20:11.923003 kubelet[2673]: I0117 12:20:11.922962 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zmgm\" (UniqueName: \"kubernetes.io/projected/7822461b-3dc7-4498-bfeb-6f9db1652d5b-kube-api-access-5zmgm\") pod \"coredns-76f75df574-l2mgq\" (UID: \"7822461b-3dc7-4498-bfeb-6f9db1652d5b\") " pod="kube-system/coredns-76f75df574-l2mgq" Jan 17 12:20:11.974695 kubelet[2673]: I0117 12:20:11.923413 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4d59\" (UniqueName: \"kubernetes.io/projected/a76c710f-94e5-4498-855a-6ad309450588-kube-api-access-p4d59\") pod \"coredns-76f75df574-82zdp\" (UID: \"a76c710f-94e5-4498-855a-6ad309450588\") " pod="kube-system/coredns-76f75df574-82zdp" Jan 17 12:20:11.974695 kubelet[2673]: I0117 12:20:11.923749 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a76c710f-94e5-4498-855a-6ad309450588-config-volume\") pod \"coredns-76f75df574-82zdp\" (UID: \"a76c710f-94e5-4498-855a-6ad309450588\") " pod="kube-system/coredns-76f75df574-82zdp" Jan 17 12:20:11.974695 kubelet[2673]: I0117 12:20:11.924100 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e6399d16-d230-4697-8af3-6eb4630e54b6-calico-apiserver-certs\") pod \"calico-apiserver-d4bd59598-xhsbk\" (UID: \"e6399d16-d230-4697-8af3-6eb4630e54b6\") " pod="calico-apiserver/calico-apiserver-d4bd59598-xhsbk" Jan 17 12:20:11.974695 kubelet[2673]: I0117 12:20:11.924356 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdrx2\" (UniqueName: \"kubernetes.io/projected/20b90efd-4f58-4814-b601-0f40e2f4b17f-kube-api-access-cdrx2\") pod \"calico-apiserver-d4bd59598-xbgwr\" (UID: \"20b90efd-4f58-4814-b601-0f40e2f4b17f\") " pod="calico-apiserver/calico-apiserver-d4bd59598-xbgwr" Jan 17 12:20:11.974695 kubelet[2673]: I0117 12:20:11.924471 2673 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7822461b-3dc7-4498-bfeb-6f9db1652d5b-config-volume\") pod \"coredns-76f75df574-l2mgq\" (UID: \"7822461b-3dc7-4498-bfeb-6f9db1652d5b\") " pod="kube-system/coredns-76f75df574-l2mgq" Jan 17 12:20:12.160893 containerd[1463]: time="2025-01-17T12:20:12.160723944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2mgq,Uid:7822461b-3dc7-4498-bfeb-6f9db1652d5b,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:12.172885 containerd[1463]: time="2025-01-17T12:20:12.172831096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82zdp,Uid:a76c710f-94e5-4498-855a-6ad309450588,Namespace:kube-system,Attempt:0,}" Jan 17 12:20:12.201943 containerd[1463]: time="2025-01-17T12:20:12.201855387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d855f4c89-2zqf4,Uid:2cf0d785-6509-49da-ba0a-ba16afb63819,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:12.248198 sshd[3402]: Invalid user abc from 85.190.243.197 port 34922 Jan 17 12:20:12.332035 kubelet[2673]: I0117 12:20:12.331444 2673 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:20:12.390472 sshd[3402]: Received disconnect from 85.190.243.197 port 34922:11: Bye Bye [preauth] Jan 17 12:20:12.390472 sshd[3402]: Disconnected from invalid user abc 85.190.243.197 port 34922 [preauth] Jan 17 12:20:12.394381 systemd[1]: sshd@15-10.128.0.67:22-85.190.243.197:34922.service: Deactivated successfully. 
Jan 17 12:20:12.538120 containerd[1463]: time="2025-01-17T12:20:12.538021027Z" level=info msg="shim disconnected" id=d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955 namespace=k8s.io Jan 17 12:20:12.538120 containerd[1463]: time="2025-01-17T12:20:12.538098167Z" level=warning msg="cleaning up after shim disconnected" id=d049439108319947be40e8c49d91cac1c96ac935abcd6ba9e6c0c5011b6db955 namespace=k8s.io Jan 17 12:20:12.538120 containerd[1463]: time="2025-01-17T12:20:12.538115136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:20:12.734553 containerd[1463]: time="2025-01-17T12:20:12.734375496Z" level=error msg="Failed to destroy network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.737163 containerd[1463]: time="2025-01-17T12:20:12.736950915Z" level=error msg="encountered an error cleaning up failed sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.737882 containerd[1463]: time="2025-01-17T12:20:12.737832266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2mgq,Uid:7822461b-3dc7-4498-bfeb-6f9db1652d5b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.738714 kubelet[2673]: E0117 12:20:12.738501 2673 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.738714 kubelet[2673]: E0117 12:20:12.738581 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2mgq" Jan 17 12:20:12.738714 kubelet[2673]: E0117 12:20:12.738616 2673 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-l2mgq" Jan 17 12:20:12.740373 kubelet[2673]: E0117 12:20:12.738695 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-l2mgq_kube-system(7822461b-3dc7-4498-bfeb-6f9db1652d5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-l2mgq_kube-system(7822461b-3dc7-4498-bfeb-6f9db1652d5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l2mgq" podUID="7822461b-3dc7-4498-bfeb-6f9db1652d5b" Jan 17 12:20:12.785886 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35-shm.mount: Deactivated successfully. Jan 17 12:20:12.793968 containerd[1463]: time="2025-01-17T12:20:12.793832639Z" level=error msg="Failed to destroy network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.795690 containerd[1463]: time="2025-01-17T12:20:12.795469003Z" level=error msg="encountered an error cleaning up failed sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.797678 containerd[1463]: time="2025-01-17T12:20:12.797619576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82zdp,Uid:a76c710f-94e5-4498-855a-6ad309450588,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.801174 kubelet[2673]: E0117 12:20:12.798918 2673 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.801174 kubelet[2673]: E0117 12:20:12.798989 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-82zdp" Jan 17 12:20:12.801174 kubelet[2673]: E0117 12:20:12.799025 2673 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-82zdp" Jan 17 12:20:12.801418 kubelet[2673]: E0117 12:20:12.799102 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-82zdp_kube-system(a76c710f-94e5-4498-855a-6ad309450588)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-82zdp_kube-system(a76c710f-94e5-4498-855a-6ad309450588)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-82zdp" podUID="a76c710f-94e5-4498-855a-6ad309450588" Jan 17 12:20:12.801999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a-shm.mount: Deactivated successfully. Jan 17 12:20:12.805406 containerd[1463]: time="2025-01-17T12:20:12.805352135Z" level=error msg="Failed to destroy network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.806246 containerd[1463]: time="2025-01-17T12:20:12.806194052Z" level=error msg="encountered an error cleaning up failed sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.806356 containerd[1463]: time="2025-01-17T12:20:12.806278696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d855f4c89-2zqf4,Uid:2cf0d785-6509-49da-ba0a-ba16afb63819,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.810055 kubelet[2673]: E0117 12:20:12.809493 2673 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:12.810055 kubelet[2673]: E0117 12:20:12.809568 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d855f4c89-2zqf4" Jan 17 12:20:12.810055 kubelet[2673]: E0117 12:20:12.809602 2673 kuberuntime_manager.go:1172] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d855f4c89-2zqf4" Jan 17 12:20:12.810342 kubelet[2673]: E0117 12:20:12.809689 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d855f4c89-2zqf4_calico-system(2cf0d785-6509-49da-ba0a-ba16afb63819)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d855f4c89-2zqf4_calico-system(2cf0d785-6509-49da-ba0a-ba16afb63819)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d855f4c89-2zqf4" podUID="2cf0d785-6509-49da-ba0a-ba16afb63819" Jan 17 12:20:12.811718 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301-shm.mount: Deactivated successfully. Jan 17 12:20:12.944927 systemd[1]: Created slice kubepods-besteffort-podc39613df_5b01_4ed2_aed8_82b1b3948bbf.slice - libcontainer container kubepods-besteffort-podc39613df_5b01_4ed2_aed8_82b1b3948bbf.slice. Jan 17 12:20:12.949659 containerd[1463]: time="2025-01-17T12:20:12.949608682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jsqk,Uid:c39613df-5b01-4ed2-aed8-82b1b3948bbf,Namespace:calico-system,Attempt:0,}" Jan 17 12:20:13.025781 containerd[1463]: time="2025-01-17T12:20:13.025703985Z" level=error msg="Failed to destroy network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.026222 containerd[1463]: time="2025-01-17T12:20:13.026177919Z" level=error msg="encountered an error cleaning up failed sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.026377 containerd[1463]: time="2025-01-17T12:20:13.026257794Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jsqk,Uid:c39613df-5b01-4ed2-aed8-82b1b3948bbf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.026622 kubelet[2673]: E0117 12:20:13.026568 2673 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.027172 kubelet[2673]: E0117 12:20:13.026648 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4jsqk" Jan 17 12:20:13.027172 kubelet[2673]: E0117 12:20:13.026689 2673 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4jsqk" Jan 17 12:20:13.027172 kubelet[2673]: E0117 12:20:13.026769 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4jsqk_calico-system(c39613df-5b01-4ed2-aed8-82b1b3948bbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4jsqk_calico-system(c39613df-5b01-4ed2-aed8-82b1b3948bbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:13.086376 containerd[1463]: time="2025-01-17T12:20:13.086227083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xhsbk,Uid:e6399d16-d230-4697-8af3-6eb4630e54b6,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:20:13.097838 containerd[1463]: time="2025-01-17T12:20:13.096889551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:20:13.099786 kubelet[2673]: I0117 12:20:13.097243 2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:13.099911 containerd[1463]: time="2025-01-17T12:20:13.098165436Z" level=info msg="StopPodSandbox for \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\"" Jan 17 12:20:13.099911 containerd[1463]: time="2025-01-17T12:20:13.098388170Z" level=info msg="Ensure that sandbox 245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a in task-service has been cleanup successfully" Jan 17 12:20:13.106077 kubelet[2673]: I0117 12:20:13.106040 2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:13.106884 containerd[1463]: time="2025-01-17T12:20:13.106831876Z" level=info msg="StopPodSandbox for \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\"" Jan 17 12:20:13.107715 containerd[1463]: time="2025-01-17T12:20:13.107311638Z" level=info msg="Ensure that sandbox 160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35 in task-service has 
been cleanup successfully" Jan 17 12:20:13.113558 containerd[1463]: time="2025-01-17T12:20:13.113506452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xbgwr,Uid:20b90efd-4f58-4814-b601-0f40e2f4b17f,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:20:13.116471 kubelet[2673]: I0117 12:20:13.116436 2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:13.121730 containerd[1463]: time="2025-01-17T12:20:13.121088693Z" level=info msg="StopPodSandbox for \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\"" Jan 17 12:20:13.121730 containerd[1463]: time="2025-01-17T12:20:13.121363624Z" level=info msg="Ensure that sandbox a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822 in task-service has been cleanup successfully" Jan 17 12:20:13.142540 kubelet[2673]: I0117 12:20:13.142491 2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:13.145903 containerd[1463]: time="2025-01-17T12:20:13.145542291Z" level=info msg="StopPodSandbox for \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\"" Jan 17 12:20:13.145903 containerd[1463]: time="2025-01-17T12:20:13.145780965Z" level=info msg="Ensure that sandbox 4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301 in task-service has been cleanup successfully" Jan 17 12:20:13.274225 containerd[1463]: time="2025-01-17T12:20:13.273979126Z" level=error msg="StopPodSandbox for \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\" failed" error="failed to destroy network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.274569 kubelet[2673]: E0117 12:20:13.274483 2673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:13.274784 kubelet[2673]: E0117 12:20:13.274603 2673 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a"} Jan 17 12:20:13.274784 kubelet[2673]: E0117 12:20:13.274703 2673 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a76c710f-94e5-4498-855a-6ad309450588\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:13.274784 kubelet[2673]: E0117 12:20:13.274753 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a76c710f-94e5-4498-855a-6ad309450588\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-82zdp" podUID="a76c710f-94e5-4498-855a-6ad309450588" Jan 17 12:20:13.285519 containerd[1463]: time="2025-01-17T12:20:13.285316651Z" level=error msg="StopPodSandbox for \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\" failed" error="failed to destroy network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.285901 kubelet[2673]: E0117 12:20:13.285697 2673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:13.285901 kubelet[2673]: E0117 12:20:13.285775 2673 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35"} Jan 17 12:20:13.285901 kubelet[2673]: E0117 12:20:13.285852 2673 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7822461b-3dc7-4498-bfeb-6f9db1652d5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:13.286877 kubelet[2673]: E0117 12:20:13.285926 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7822461b-3dc7-4498-bfeb-6f9db1652d5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-l2mgq" podUID="7822461b-3dc7-4498-bfeb-6f9db1652d5b" Jan 17 12:20:13.290790 containerd[1463]: time="2025-01-17T12:20:13.290718918Z" level=error msg="StopPodSandbox for \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\" failed" error="failed to destroy network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.291075 kubelet[2673]: E0117 12:20:13.291046 2673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to destroy network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:13.291287 kubelet[2673]: E0117 12:20:13.291104 2673 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301"} Jan 17 12:20:13.291287 kubelet[2673]: E0117 12:20:13.291217 2673 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cf0d785-6509-49da-ba0a-ba16afb63819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:13.291287 kubelet[2673]: E0117 12:20:13.291269 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cf0d785-6509-49da-ba0a-ba16afb63819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d855f4c89-2zqf4" podUID="2cf0d785-6509-49da-ba0a-ba16afb63819" Jan 17 12:20:13.314672 containerd[1463]: time="2025-01-17T12:20:13.314170579Z" level=error msg="StopPodSandbox for \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\" failed" error="failed to destroy network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.315285 kubelet[2673]: E0117 12:20:13.314640 2673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:13.315285 kubelet[2673]: E0117 12:20:13.315080 2673 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822"} Jan 17 12:20:13.315285 kubelet[2673]: E0117 12:20:13.315205 2673 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c39613df-5b01-4ed2-aed8-82b1b3948bbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:13.315663 kubelet[2673]: E0117 12:20:13.315263 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c39613df-5b01-4ed2-aed8-82b1b3948bbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4jsqk" podUID="c39613df-5b01-4ed2-aed8-82b1b3948bbf" Jan 17 12:20:13.355256 containerd[1463]: time="2025-01-17T12:20:13.352802855Z" level=error msg="Failed to destroy network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.357019 containerd[1463]: time="2025-01-17T12:20:13.356962174Z" level=error msg="encountered an error cleaning up failed sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.357435 containerd[1463]: time="2025-01-17T12:20:13.357320659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xhsbk,Uid:e6399d16-d230-4697-8af3-6eb4630e54b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.359372 kubelet[2673]: E0117 12:20:13.358041 2673 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.359372 kubelet[2673]: E0117 12:20:13.358113 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d4bd59598-xhsbk" Jan 17 12:20:13.359372 kubelet[2673]: E0117 12:20:13.358172 2673 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-d4bd59598-xhsbk" Jan 17 12:20:13.359665 kubelet[2673]: E0117 12:20:13.358254 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d4bd59598-xhsbk_calico-apiserver(e6399d16-d230-4697-8af3-6eb4630e54b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d4bd59598-xhsbk_calico-apiserver(e6399d16-d230-4697-8af3-6eb4630e54b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d4bd59598-xhsbk" podUID="e6399d16-d230-4697-8af3-6eb4630e54b6" Jan 17 12:20:13.378118 containerd[1463]: time="2025-01-17T12:20:13.377953714Z" level=error msg="Failed to destroy network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.378945 containerd[1463]: time="2025-01-17T12:20:13.378706797Z" level=error msg="encountered an error cleaning up failed sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.378945 containerd[1463]: time="2025-01-17T12:20:13.378833913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xbgwr,Uid:20b90efd-4f58-4814-b601-0f40e2f4b17f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.380819 kubelet[2673]: E0117 12:20:13.379467 2673 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:13.380819 kubelet[2673]: E0117 12:20:13.379545 2673 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d4bd59598-xbgwr" Jan 17 12:20:13.380819 kubelet[2673]: E0117 12:20:13.379578 2673 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d4bd59598-xbgwr" Jan 17 12:20:13.381082 kubelet[2673]: E0117 12:20:13.379656 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d4bd59598-xbgwr_calico-apiserver(20b90efd-4f58-4814-b601-0f40e2f4b17f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d4bd59598-xbgwr_calico-apiserver(20b90efd-4f58-4814-b601-0f40e2f4b17f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d4bd59598-xbgwr" podUID="20b90efd-4f58-4814-b601-0f40e2f4b17f" Jan 17 12:20:14.142324 kubelet[2673]: I0117 12:20:14.141787 2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:14.143823 containerd[1463]: time="2025-01-17T12:20:14.143315776Z" level=info msg="StopPodSandbox for \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\"" Jan 17 12:20:14.143823 containerd[1463]: time="2025-01-17T12:20:14.143561848Z" level=info msg="Ensure that sandbox de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a in task-service has been cleanup successfully" Jan 17 12:20:14.146486 kubelet[2673]: I0117 12:20:14.146310 2673 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:14.147959 containerd[1463]: time="2025-01-17T12:20:14.147792996Z" level=info msg="StopPodSandbox for \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\"" Jan 17 12:20:14.148643 containerd[1463]: time="2025-01-17T12:20:14.148314949Z" level=info msg="Ensure that sandbox 4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862 in task-service has been cleanup successfully" Jan 17 12:20:14.221978 containerd[1463]: time="2025-01-17T12:20:14.221620757Z" level=error msg="StopPodSandbox for \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\" failed" error="failed to destroy network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:14.222184 kubelet[2673]: E0117 12:20:14.222018 2673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:14.222184 kubelet[2673]: E0117 12:20:14.222075 2673 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a"} Jan 17 12:20:14.222184 
kubelet[2673]: E0117 12:20:14.222144 2673 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20b90efd-4f58-4814-b601-0f40e2f4b17f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:14.222436 kubelet[2673]: E0117 12:20:14.222195 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20b90efd-4f58-4814-b601-0f40e2f4b17f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d4bd59598-xbgwr" podUID="20b90efd-4f58-4814-b601-0f40e2f4b17f" Jan 17 12:20:14.223508 containerd[1463]: time="2025-01-17T12:20:14.223455184Z" level=error msg="StopPodSandbox for \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\" failed" error="failed to destroy network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:20:14.223744 kubelet[2673]: E0117 12:20:14.223719 2673 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:14.223847 kubelet[2673]: E0117 12:20:14.223771 2673 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862"} Jan 17 12:20:14.223847 kubelet[2673]: E0117 12:20:14.223832 2673 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6399d16-d230-4697-8af3-6eb4630e54b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:20:14.224039 kubelet[2673]: E0117 12:20:14.223885 2673 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6399d16-d230-4697-8af3-6eb4630e54b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d4bd59598-xhsbk" podUID="e6399d16-d230-4697-8af3-6eb4630e54b6" Jan 17 12:20:16.932950 systemd[1]: Started sshd@16-10.128.0.67:22-51.178.141.222:47066.service - OpenSSH per-connection server daemon (51.178.141.222:47066). Jan 17 12:20:17.570802 sshd[3737]: Invalid user nagios from 51.178.141.222 port 47066 Jan 17 12:20:17.688069 sshd[3737]: Received disconnect from 51.178.141.222 port 47066:11: Bye Bye [preauth] Jan 17 12:20:17.688069 sshd[3737]: Disconnected from invalid user nagios 51.178.141.222 port 47066 [preauth] Jan 17 12:20:17.692385 systemd[1]: sshd@16-10.128.0.67:22-51.178.141.222:47066.service: Deactivated successfully. Jan 17 12:20:20.425463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115759161.mount: Deactivated successfully. Jan 17 12:20:20.486099 containerd[1463]: time="2025-01-17T12:20:20.486018518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:20.487692 containerd[1463]: time="2025-01-17T12:20:20.487604624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:20:20.489508 containerd[1463]: time="2025-01-17T12:20:20.489422639Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:20.493681 containerd[1463]: time="2025-01-17T12:20:20.493621137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:20.494930 containerd[1463]: time="2025-01-17T12:20:20.494663099Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.397722274s" Jan 17 12:20:20.494930 containerd[1463]: time="2025-01-17T12:20:20.494720120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:20:20.524011 containerd[1463]: time="2025-01-17T12:20:20.523793415Z" level=info msg="CreateContainer within sandbox \"b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:20:20.549531 containerd[1463]: time="2025-01-17T12:20:20.549490640Z" level=info msg="CreateContainer within sandbox \"b313129739e3edfce1f49eb5ed32e57873fbf7e76f86661850623ed968976819\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ea56bf1dbc779c5a76ec8e889190a5b5070ff32d2acd475e7408068aef2910e1\"" Jan 17 12:20:20.550649 containerd[1463]: time="2025-01-17T12:20:20.550392971Z" level=info msg="StartContainer for \"ea56bf1dbc779c5a76ec8e889190a5b5070ff32d2acd475e7408068aef2910e1\"" Jan 17 12:20:20.590452 systemd[1]: Started cri-containerd-ea56bf1dbc779c5a76ec8e889190a5b5070ff32d2acd475e7408068aef2910e1.scope - libcontainer container ea56bf1dbc779c5a76ec8e889190a5b5070ff32d2acd475e7408068aef2910e1. 
Jan 17 12:20:20.637162 containerd[1463]: time="2025-01-17T12:20:20.634003940Z" level=info msg="StartContainer for \"ea56bf1dbc779c5a76ec8e889190a5b5070ff32d2acd475e7408068aef2910e1\" returns successfully" Jan 17 12:20:20.746251 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:20:20.746462 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 17 12:20:21.198192 kubelet[2673]: I0117 12:20:21.198113 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-hgmf4" podStartSLOduration=1.826517184 podStartE2EDuration="21.198014112s" podCreationTimestamp="2025-01-17 12:20:00 +0000 UTC" firstStartedPulling="2025-01-17 12:20:01.123912486 +0000 UTC m=+26.362902668" lastFinishedPulling="2025-01-17 12:20:20.495409423 +0000 UTC m=+45.734399596" observedRunningTime="2025-01-17 12:20:21.192392507 +0000 UTC m=+46.431382695" watchObservedRunningTime="2025-01-17 12:20:21.198014112 +0000 UTC m=+46.437004297" Jan 17 12:20:22.222043 systemd[1]: run-containerd-runc-k8s.io-ea56bf1dbc779c5a76ec8e889190a5b5070ff32d2acd475e7408068aef2910e1-runc.klgtNh.mount: Deactivated successfully. Jan 17 12:20:22.647172 kernel: bpftool[3978]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:20:22.930695 systemd-networkd[1370]: vxlan.calico: Link UP Jan 17 12:20:22.930709 systemd-networkd[1370]: vxlan.calico: Gained carrier Jan 17 12:20:24.855332 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Jan 17 12:20:24.939986 containerd[1463]: time="2025-01-17T12:20:24.938915877Z" level=info msg="StopPodSandbox for \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\"" Jan 17 12:20:24.939986 containerd[1463]: time="2025-01-17T12:20:24.939533738Z" level=info msg="StopPodSandbox for \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\"" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.027 [INFO][4077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.028 [INFO][4077] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" iface="eth0" netns="/var/run/netns/cni-842b1ddf-d3ca-4bc8-7785-2f88918a47a7" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.030 [INFO][4077] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" iface="eth0" netns="/var/run/netns/cni-842b1ddf-d3ca-4bc8-7785-2f88918a47a7" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.032 [INFO][4077] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" iface="eth0" netns="/var/run/netns/cni-842b1ddf-d3ca-4bc8-7785-2f88918a47a7" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.032 [INFO][4077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.032 [INFO][4077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.081 [INFO][4089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.081 [INFO][4089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.081 [INFO][4089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.091 [WARNING][4089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.091 [INFO][4089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.095 [INFO][4089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:25.111315 containerd[1463]: 2025-01-17 12:20:25.104 [INFO][4077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:25.111315 containerd[1463]: time="2025-01-17T12:20:25.107281713Z" level=info msg="TearDown network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\" successfully" Jan 17 12:20:25.111315 containerd[1463]: time="2025-01-17T12:20:25.107323990Z" level=info msg="StopPodSandbox for \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\" returns successfully" Jan 17 12:20:25.111315 containerd[1463]: time="2025-01-17T12:20:25.110823438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jsqk,Uid:c39613df-5b01-4ed2-aed8-82b1b3948bbf,Namespace:calico-system,Attempt:1,}" Jan 17 12:20:25.116812 systemd[1]: run-netns-cni\x2d842b1ddf\x2dd3ca\x2d4bc8\x2d7785\x2d2f88918a47a7.mount: Deactivated successfully. 
Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.025 [INFO][4073] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.031 [INFO][4073] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" iface="eth0" netns="/var/run/netns/cni-7186e773-a824-6049-bcf7-99c66f669829" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.034 [INFO][4073] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" iface="eth0" netns="/var/run/netns/cni-7186e773-a824-6049-bcf7-99c66f669829" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.035 [INFO][4073] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" iface="eth0" netns="/var/run/netns/cni-7186e773-a824-6049-bcf7-99c66f669829" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.036 [INFO][4073] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.036 [INFO][4073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.093 [INFO][4093] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.094 [INFO][4093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.096 [INFO][4093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.108 [WARNING][4093] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.108 [INFO][4093] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.114 [INFO][4093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:25.121652 containerd[1463]: 2025-01-17 12:20:25.119 [INFO][4073] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:25.123035 containerd[1463]: time="2025-01-17T12:20:25.121814425Z" level=info msg="TearDown network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\" successfully" Jan 17 12:20:25.123035 containerd[1463]: time="2025-01-17T12:20:25.121858401Z" level=info msg="StopPodSandbox for \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\" returns successfully" Jan 17 12:20:25.124330 containerd[1463]: time="2025-01-17T12:20:25.123971356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xbgwr,Uid:20b90efd-4f58-4814-b601-0f40e2f4b17f,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:20:25.127916 systemd[1]: run-netns-cni\x2d7186e773\x2da824\x2d6049\x2dbcf7\x2d99c66f669829.mount: Deactivated successfully. Jan 17 12:20:25.366108 systemd-networkd[1370]: calibdcdd86e839: Link UP Jan 17 12:20:25.370330 systemd-networkd[1370]: calibdcdd86e839: Gained carrier Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.246 [INFO][4112] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0 calico-apiserver-d4bd59598- calico-apiserver 20b90efd-4f58-4814-b601-0f40e2f4b17f 751 0 2025-01-17 12:20:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d4bd59598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal calico-apiserver-d4bd59598-xbgwr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibdcdd86e839 [] []}} ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.246 [INFO][4112] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.309 [INFO][4125] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" HandleID="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.323 [INFO][4125] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" HandleID="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319710), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", "pod":"calico-apiserver-d4bd59598-xbgwr", "timestamp":"2025-01-17 12:20:25.309565258 +0000 UTC"}, Hostname:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.323 [INFO][4125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.323 [INFO][4125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.323 [INFO][4125] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.326 [INFO][4125] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.333 [INFO][4125] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.339 [INFO][4125] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.340 [INFO][4125] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.343 [INFO][4125] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.343 [INFO][4125] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.345 [INFO][4125] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.350 [INFO][4125] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.357 [INFO][4125] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.129/26] block=192.168.53.128/26 handle="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.357 [INFO][4125] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.129/26] handle="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" 
host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.357 [INFO][4125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:25.400647 containerd[1463]: 2025-01-17 12:20:25.357 [INFO][4125] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.129/26] IPv6=[] ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" HandleID="k8s-pod-network.3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.404819 containerd[1463]: 2025-01-17 12:20:25.360 [INFO][4112] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b90efd-4f58-4814-b601-0f40e2f4b17f", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-d4bd59598-xbgwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdcdd86e839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:25.404819 containerd[1463]: 2025-01-17 12:20:25.361 [INFO][4112] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.129/32] ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.404819 containerd[1463]: 2025-01-17 12:20:25.361 [INFO][4112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdcdd86e839 ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.404819 containerd[1463]: 2025-01-17 12:20:25.371 [INFO][4112] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.404819 containerd[1463]: 2025-01-17 12:20:25.372 [INFO][4112] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b90efd-4f58-4814-b601-0f40e2f4b17f", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da", Pod:"calico-apiserver-d4bd59598-xbgwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdcdd86e839", MAC:"b2:62:b3:ff:ec:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:25.404819 containerd[1463]: 2025-01-17 12:20:25.394 [INFO][4112] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xbgwr" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:25.446940 systemd-networkd[1370]: caliea06cda453e: Link UP Jan 17 12:20:25.451199 systemd-networkd[1370]: caliea06cda453e: Gained carrier Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.265 [INFO][4102] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0 csi-node-driver- calico-system c39613df-5b01-4ed2-aed8-82b1b3948bbf 752 0 2025-01-17 12:20:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal csi-node-driver-4jsqk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliea06cda453e [] []}} ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.265 [INFO][4102] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.319 [INFO][4129] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" HandleID="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.336 [INFO][4129] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" HandleID="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", "pod":"csi-node-driver-4jsqk", "timestamp":"2025-01-17 12:20:25.319845954 +0000 UTC"}, Hostname:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.336 [INFO][4129] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.358 [INFO][4129] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.358 [INFO][4129] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.361 [INFO][4129] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.374 [INFO][4129] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.391 [INFO][4129] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.395 [INFO][4129] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.406 [INFO][4129] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.407 [INFO][4129] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.410 [INFO][4129] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.420 [INFO][4129] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.433 [INFO][4129] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.130/26] block=192.168.53.128/26 handle="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.433 [INFO][4129] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.130/26] handle="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.433 [INFO][4129] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
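The allocation traces above show the other half of IPAM: this node already holds an affinity for the block 192.168.53.128/26, so pods scheduled here get addresses claimed sequentially from that block under the host-wide lock (192.168.53.129 for calico-apiserver-d4bd59598-xbgwr, then 192.168.53.130 for csi-node-driver-4jsqk). A toy "next free address in the affine block" sketch using Go's net/netip, not Calico's implementation:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the affine block and returns the first address not already
// in use, roughly mirroring the per-block assignment in the trace above.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if a == block.Addr() {
			continue // the .128 base address is never handed out in the trace above
		}
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.53.128/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.53.129"): true, // calico-apiserver-d4bd59598-xbgwr
	}
	ip, _ := nextFree(block, used)
	fmt.Println(ip) // 192.168.53.130, the address handed to csi-node-driver-4jsqk
}
```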
Jan 17 12:20:25.492546 containerd[1463]: 2025-01-17 12:20:25.433 [INFO][4129] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.130/26] IPv6=[] ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" HandleID="k8s-pod-network.913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.495818 containerd[1463]: 2025-01-17 12:20:25.438 [INFO][4102] cni-plugin/k8s.go 386: Populated endpoint ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c39613df-5b01-4ed2-aed8-82b1b3948bbf", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-4jsqk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea06cda453e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:25.495818 containerd[1463]: 2025-01-17 12:20:25.438 [INFO][4102] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.130/32] ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.495818 containerd[1463]: 2025-01-17 12:20:25.440 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea06cda453e ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.495818 containerd[1463]: 2025-01-17 12:20:25.455 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" 
WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.495818 containerd[1463]: 2025-01-17 12:20:25.459 [INFO][4102] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c39613df-5b01-4ed2-aed8-82b1b3948bbf", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c", Pod:"csi-node-driver-4jsqk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea06cda453e", MAC:"6e:3d:33:68:db:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:25.495818 containerd[1463]: 2025-01-17 12:20:25.485 [INFO][4102] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c" Namespace="calico-system" Pod="csi-node-driver-4jsqk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:25.498634 containerd[1463]: time="2025-01-17T12:20:25.497759645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:25.498634 containerd[1463]: time="2025-01-17T12:20:25.497876096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:25.498634 containerd[1463]: time="2025-01-17T12:20:25.497903353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:25.499767 containerd[1463]: time="2025-01-17T12:20:25.498398296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:25.543351 systemd[1]: Started cri-containerd-3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da.scope - libcontainer container 3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da. Jan 17 12:20:25.567550 containerd[1463]: time="2025-01-17T12:20:25.567385854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:25.567550 containerd[1463]: time="2025-01-17T12:20:25.567470713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:25.567550 containerd[1463]: time="2025-01-17T12:20:25.567499444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:25.568696 containerd[1463]: time="2025-01-17T12:20:25.567644653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:25.598403 systemd[1]: Started cri-containerd-913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c.scope - libcontainer container 913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c. Jan 17 12:20:25.625217 containerd[1463]: time="2025-01-17T12:20:25.624808826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xbgwr,Uid:20b90efd-4f58-4814-b601-0f40e2f4b17f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da\"" Jan 17 12:20:25.628388 containerd[1463]: time="2025-01-17T12:20:25.627908208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:20:25.650599 containerd[1463]: time="2025-01-17T12:20:25.650492634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4jsqk,Uid:c39613df-5b01-4ed2-aed8-82b1b3948bbf,Namespace:calico-system,Attempt:1,} returns sandbox id \"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c\"" Jan 17 12:20:26.519669 systemd-networkd[1370]: calibdcdd86e839: Gained IPv6LL Jan 17 12:20:26.939668 containerd[1463]: time="2025-01-17T12:20:26.939098532Z" level=info msg="StopPodSandbox for \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\"" Jan 17 12:20:26.939668 containerd[1463]: time="2025-01-17T12:20:26.939335237Z" level=info msg="StopPodSandbox for \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\"" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.092 [INFO][4274] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.092 [INFO][4274] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" iface="eth0" netns="/var/run/netns/cni-3f5acea2-1065-3b6f-417e-94554df136e2" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.095 [INFO][4274] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" iface="eth0" netns="/var/run/netns/cni-3f5acea2-1065-3b6f-417e-94554df136e2" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.096 [INFO][4274] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" iface="eth0" netns="/var/run/netns/cni-3f5acea2-1065-3b6f-417e-94554df136e2" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.096 [INFO][4274] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.096 [INFO][4274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.148 [INFO][4287] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.149 [INFO][4287] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.149 [INFO][4287] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.160 [WARNING][4287] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.160 [INFO][4287] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.162 [INFO][4287] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:27.165816 containerd[1463]: 2025-01-17 12:20:27.163 [INFO][4274] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:27.171874 containerd[1463]: time="2025-01-17T12:20:27.171228789Z" level=info msg="TearDown network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\" successfully" Jan 17 12:20:27.171874 containerd[1463]: time="2025-01-17T12:20:27.171419219Z" level=info msg="StopPodSandbox for \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\" returns successfully" Jan 17 12:20:27.177709 containerd[1463]: time="2025-01-17T12:20:27.173421306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82zdp,Uid:a76c710f-94e5-4498-855a-6ad309450588,Namespace:kube-system,Attempt:1,}" Jan 17 12:20:27.180027 systemd[1]: run-netns-cni\x2d3f5acea2\x2d1065\x2d3b6f\x2d417e\x2d94554df136e2.mount: Deactivated successfully. 
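The run-netns-cni\x2d… mount units being deactivated after each teardown are just the systemd-escaped names of the CNI network-namespace bind mounts under /run/netns (on Flatcar, /var/run is a symlink to /run): each '/' becomes '-' and each literal '-' becomes \x2d, so run-netns-cni\x2d3f5acea2… is the unit for the netns path /var/run/netns/cni-3f5acea2-1065-3b6f-417e-94554df136e2 referenced in the trace above. A rough sketch of that escaping; simplified, since real systemd also special-cases a leading dot and a few other characters.

```go
package main

import (
	"fmt"
	"strings"
)

// mountUnitName is a simplified sketch of systemd's path escaping for mount
// units: strip the surrounding slashes, turn '/' into '-', and hex-escape
// anything outside [A-Za-z0-9_.], including literal '-'.
func mountUnitName(path string) string {
	path = strings.Trim(path, "/")
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String() + ".mount"
}

func main() {
	// /var/run is a symlink to /run, so the unit is named after /run/netns/…
	fmt.Println(mountUnitName("/run/netns/cni-3f5acea2-1065-3b6f-417e-94554df136e2"))
	// run-netns-cni\x2d3f5acea2\x2d1065\x2d3b6f\x2d417e\x2d94554df136e2.mount
}
```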
Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.111 [INFO][4275] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.111 [INFO][4275] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" iface="eth0" netns="/var/run/netns/cni-fa79055f-1128-388d-6bb7-7f8bffcebeea" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.111 [INFO][4275] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" iface="eth0" netns="/var/run/netns/cni-fa79055f-1128-388d-6bb7-7f8bffcebeea" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.114 [INFO][4275] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" iface="eth0" netns="/var/run/netns/cni-fa79055f-1128-388d-6bb7-7f8bffcebeea" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.115 [INFO][4275] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.115 [INFO][4275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.223 [INFO][4291] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.223 [INFO][4291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.224 [INFO][4291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.241 [WARNING][4291] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.241 [INFO][4291] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.244 [INFO][4291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:27.255495 containerd[1463]: 2025-01-17 12:20:27.248 [INFO][4275] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:27.260551 containerd[1463]: time="2025-01-17T12:20:27.255560015Z" level=info msg="TearDown network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\" successfully" Jan 17 12:20:27.260551 containerd[1463]: time="2025-01-17T12:20:27.255597407Z" level=info msg="StopPodSandbox for \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\" returns successfully" Jan 17 12:20:27.260551 containerd[1463]: time="2025-01-17T12:20:27.258736632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2mgq,Uid:7822461b-3dc7-4498-bfeb-6f9db1652d5b,Namespace:kube-system,Attempt:1,}" Jan 17 12:20:27.268568 systemd[1]: run-netns-cni\x2dfa79055f\x2d1128\x2d388d\x2d6bb7\x2d7f8bffcebeea.mount: Deactivated successfully. Jan 17 12:20:27.416948 systemd-networkd[1370]: caliea06cda453e: Gained IPv6LL Jan 17 12:20:27.551114 systemd-networkd[1370]: cali3aca300d9cd: Link UP Jan 17 12:20:27.555670 systemd-networkd[1370]: cali3aca300d9cd: Gained carrier Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.375 [INFO][4314] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0 coredns-76f75df574- kube-system 7822461b-3dc7-4498-bfeb-6f9db1652d5b 768 0 2025-01-17 12:19:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal coredns-76f75df574-l2mgq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3aca300d9cd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.375 [INFO][4314] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.461 [INFO][4330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" HandleID="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.487 [INFO][4330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" HandleID="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef5d0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", "pod":"coredns-76f75df574-l2mgq", "timestamp":"2025-01-17 12:20:27.461543578 +0000 UTC"}, Hostname:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.488 [INFO][4330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.488 [INFO][4330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.488 [INFO][4330] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.491 [INFO][4330] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.498 [INFO][4330] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.506 [INFO][4330] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.510 [INFO][4330] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.516 [INFO][4330] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.516 [INFO][4330] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.518 [INFO][4330] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210 Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.525 [INFO][4330] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.538 [INFO][4330] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.131/26] block=192.168.53.128/26 handle="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.538 [INFO][4330] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.131/26] handle="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 
12:20:27.538 [INFO][4330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:27.606968 containerd[1463]: 2025-01-17 12:20:27.538 [INFO][4330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.131/26] IPv6=[] ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" HandleID="k8s-pod-network.6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.609274 containerd[1463]: 2025-01-17 12:20:27.542 [INFO][4314] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7822461b-3dc7-4498-bfeb-6f9db1652d5b", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-l2mgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aca300d9cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:27.609274 containerd[1463]: 2025-01-17 12:20:27.542 [INFO][4314] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.131/32] ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.609274 containerd[1463]: 2025-01-17 12:20:27.542 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3aca300d9cd ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 
17 12:20:27.609274 containerd[1463]: 2025-01-17 12:20:27.559 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.609274 containerd[1463]: 2025-01-17 12:20:27.562 [INFO][4314] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7822461b-3dc7-4498-bfeb-6f9db1652d5b", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210", Pod:"coredns-76f75df574-l2mgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aca300d9cd", MAC:"1a:b9:42:af:d8:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:27.609274 containerd[1463]: 2025-01-17 12:20:27.596 [INFO][4314] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210" Namespace="kube-system" Pod="coredns-76f75df574-l2mgq" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:27.675181 systemd-networkd[1370]: cali821b0cf0ab3: Link UP Jan 17 12:20:27.678969 systemd-networkd[1370]: cali821b0cf0ab3: Gained carrier Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.342 [INFO][4300] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0 
coredns-76f75df574- kube-system a76c710f-94e5-4498-855a-6ad309450588 767 0 2025-01-17 12:19:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal coredns-76f75df574-82zdp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali821b0cf0ab3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.343 [INFO][4300] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.465 [INFO][4326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" HandleID="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.487 [INFO][4326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" HandleID="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000293750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", "pod":"coredns-76f75df574-82zdp", "timestamp":"2025-01-17 12:20:27.465721116 +0000 UTC"}, Hostname:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.488 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.538 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.538 [INFO][4326] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.543 [INFO][4326] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.569 [INFO][4326] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.582 [INFO][4326] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.588 [INFO][4326] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.600 [INFO][4326] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.600 [INFO][4326] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.607 [INFO][4326] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760 Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.617 [INFO][4326] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.632 [INFO][4326] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.132/26] block=192.168.53.128/26 handle="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.633 [INFO][4326] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.132/26] handle="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.634 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
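Editor's note: the ipam/ipam.go entries above trace Calico's block-affinity assignment for the second CoreDNS pod — acquire the host-wide IPAM lock, look up the host's affinities, load the affine block 192.168.53.128/26, claim 192.168.53.132, write the block, release the lock. The Go sketch below is only a compressed model of that logged sequence; the types and function names are hypothetical and are not the real projectcalico IPAM API.

```go
// Illustrative model of the block-affinity IPAM flow in the log above.
// NOT the real Calico IPAM code; names and types are invented for clarity.
package main

import (
	"fmt"
	"net"
	"sync"
)

// block models one /26 CIDR affine to a host, with a naive allocation map.
type block struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle ID
}

type ipam struct {
	mu     sync.Mutex        // stands in for the host-wide IPAM lock
	blocks map[string]*block // host -> affine block
}

// autoAssign mirrors the logged steps: acquire lock, look up the host's
// affinity, load the block, claim one free address for the handle, release.
func (p *ipam) autoAssign(host, handle string) (net.IP, error) {
	p.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired..."
	defer p.mu.Unlock() // "Released host-wide IPAM lock."

	b, ok := p.blocks[host] // "Looking up existing affinities for host"
	if !ok {
		return nil, fmt.Errorf("no affine block for host %s", host)
	}
	// "Trying affinity for 192.168.53.128/26" / "Attempting to load block"
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
		if _, used := b.allocated[ip.String()]; !used {
			b.allocated[ip.String()] = handle // "Writing block in order to claim IPs"
			return ip, nil                    // "Successfully claimed IPs: [...]"
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// next returns the numerically following IP address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.53.128/26")
	p := &ipam{blocks: map[string]*block{
		"ci-4081-3-0": {cidr: cidr, allocated: map[string]string{}},
	}}
	ip, _ := p.autoAssign("ci-4081-3-0", "k8s-pod-network.67c2ef7b")
	fmt.Println("assigned", ip) // first free address in the host's affine /26
}
```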
Jan 17 12:20:27.715089 containerd[1463]: 2025-01-17 12:20:27.634 [INFO][4326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.132/26] IPv6=[] ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" HandleID="k8s-pod-network.67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.718333 containerd[1463]: 2025-01-17 12:20:27.644 [INFO][4300] cni-plugin/k8s.go 386: Populated endpoint ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a76c710f-94e5-4498-855a-6ad309450588", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-82zdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali821b0cf0ab3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:27.718333 containerd[1463]: 2025-01-17 12:20:27.646 [INFO][4300] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.132/32] ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.718333 containerd[1463]: 2025-01-17 12:20:27.646 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali821b0cf0ab3 ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.718333 containerd[1463]: 2025-01-17 12:20:27.681 [INFO][4300] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.718333 containerd[1463]: 2025-01-17 12:20:27.682 [INFO][4300] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a76c710f-94e5-4498-855a-6ad309450588", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760", Pod:"coredns-76f75df574-82zdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali821b0cf0ab3", MAC:"b6:a6:15:47:8a:66", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:27.718333 containerd[1463]: 2025-01-17 12:20:27.707 [INFO][4300] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760" Namespace="kube-system" Pod="coredns-76f75df574-82zdp" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:27.736244 containerd[1463]: time="2025-01-17T12:20:27.722986622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:27.736244 containerd[1463]: time="2025-01-17T12:20:27.723086366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:27.736244 containerd[1463]: time="2025-01-17T12:20:27.723108338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:27.736244 containerd[1463]: time="2025-01-17T12:20:27.723270526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:27.805407 systemd[1]: Started cri-containerd-6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210.scope - libcontainer container 6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210. Jan 17 12:20:27.823064 containerd[1463]: time="2025-01-17T12:20:27.822768040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:27.823064 containerd[1463]: time="2025-01-17T12:20:27.822880295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:27.824050 containerd[1463]: time="2025-01-17T12:20:27.822901710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:27.824050 containerd[1463]: time="2025-01-17T12:20:27.823038887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:27.890405 systemd[1]: Started cri-containerd-67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760.scope - libcontainer container 67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760. Jan 17 12:20:27.939367 containerd[1463]: time="2025-01-17T12:20:27.937894839Z" level=info msg="StopPodSandbox for \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\"" Jan 17 12:20:27.958533 containerd[1463]: time="2025-01-17T12:20:27.957101104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-l2mgq,Uid:7822461b-3dc7-4498-bfeb-6f9db1652d5b,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210\"" Jan 17 12:20:27.969916 containerd[1463]: time="2025-01-17T12:20:27.969368775Z" level=info msg="CreateContainer within sandbox \"6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:20:28.014519 containerd[1463]: time="2025-01-17T12:20:28.014275620Z" level=info msg="CreateContainer within sandbox \"6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"848fbe6234d7486b9110ae35c6f8e298757d9fcf9bd57380c83330476b992fab\"" Jan 17 12:20:28.018236 containerd[1463]: time="2025-01-17T12:20:28.018155218Z" level=info msg="StartContainer for \"848fbe6234d7486b9110ae35c6f8e298757d9fcf9bd57380c83330476b992fab\"" Jan 17 12:20:28.041453 containerd[1463]: time="2025-01-17T12:20:28.040739487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-82zdp,Uid:a76c710f-94e5-4498-855a-6ad309450588,Namespace:kube-system,Attempt:1,} returns sandbox id \"67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760\"" Jan 17 12:20:28.054158 containerd[1463]: time="2025-01-17T12:20:28.053947623Z" level=info msg="CreateContainer within sandbox \"67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760\" for 
container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:20:28.089953 containerd[1463]: time="2025-01-17T12:20:28.087389469Z" level=info msg="CreateContainer within sandbox \"67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cf8a29a557f42364f44cab234690dc6d7a397f00ce5ec0bfd962bc0df4bf406\"" Jan 17 12:20:28.091892 containerd[1463]: time="2025-01-17T12:20:28.091836863Z" level=info msg="StartContainer for \"8cf8a29a557f42364f44cab234690dc6d7a397f00ce5ec0bfd962bc0df4bf406\"" Jan 17 12:20:28.110815 systemd[1]: Started cri-containerd-848fbe6234d7486b9110ae35c6f8e298757d9fcf9bd57380c83330476b992fab.scope - libcontainer container 848fbe6234d7486b9110ae35c6f8e298757d9fcf9bd57380c83330476b992fab. Jan 17 12:20:28.236441 systemd[1]: Started cri-containerd-8cf8a29a557f42364f44cab234690dc6d7a397f00ce5ec0bfd962bc0df4bf406.scope - libcontainer container 8cf8a29a557f42364f44cab234690dc6d7a397f00ce5ec0bfd962bc0df4bf406. Jan 17 12:20:28.305338 containerd[1463]: time="2025-01-17T12:20:28.304776579Z" level=info msg="StartContainer for \"848fbe6234d7486b9110ae35c6f8e298757d9fcf9bd57380c83330476b992fab\" returns successfully" Jan 17 12:20:28.478611 containerd[1463]: time="2025-01-17T12:20:28.478343004Z" level=info msg="StartContainer for \"8cf8a29a557f42364f44cab234690dc6d7a397f00ce5ec0bfd962bc0df4bf406\" returns successfully" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.318 [INFO][4450] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.318 [INFO][4450] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" iface="eth0" netns="/var/run/netns/cni-216d0210-86e1-ddf9-b219-a752706b87a9" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.320 [INFO][4450] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" iface="eth0" netns="/var/run/netns/cni-216d0210-86e1-ddf9-b219-a752706b87a9" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.321 [INFO][4450] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" iface="eth0" netns="/var/run/netns/cni-216d0210-86e1-ddf9-b219-a752706b87a9" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.322 [INFO][4450] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.322 [INFO][4450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.461 [INFO][4521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.461 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.461 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.489 [WARNING][4521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.489 [INFO][4521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.492 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:28.500387 containerd[1463]: 2025-01-17 12:20:28.495 [INFO][4450] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:28.502207 containerd[1463]: time="2025-01-17T12:20:28.501451178Z" level=info msg="TearDown network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\" successfully" Jan 17 12:20:28.502207 containerd[1463]: time="2025-01-17T12:20:28.501491924Z" level=info msg="StopPodSandbox for \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\" returns successfully" Jan 17 12:20:28.502447 containerd[1463]: time="2025-01-17T12:20:28.502412214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d855f4c89-2zqf4,Uid:2cf0d785-6509-49da-ba0a-ba16afb63819,Namespace:calico-system,Attempt:1,}" Jan 17 12:20:28.513451 systemd[1]: run-netns-cni\x2d216d0210\x2d86e1\x2dddf9\x2db219\x2da752706b87a9.mount: Deactivated successfully. 
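Editor's note: the teardown above (StopPodSandbox for 4bb9985b...) shows the release path is deliberately idempotent — release by handle ID, log the "Asked to release address but it doesn't exist" WARNING as a no-op, then release by workload ID, so a repeated CNI DEL cannot fail the sandbox cleanup. The sketch below models only that behaviour; the function names are illustrative, not the actual Calico CNI plugin API.

```go
// Illustrative sketch of the idempotent IP-release order seen in the teardown:
// try handle ID, ignore "doesn't exist", then try the workload ID key.
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("address doesn't exist")

type store struct{ byKey map[string]string }

func (s *store) release(key string) error {
	if _, ok := s.byKey[key]; !ok {
		return errNotFound
	}
	delete(s.byKey, key)
	return nil
}

// releaseIP mirrors the logged order of operations for a CNI DEL.
func releaseIP(s *store, handleID, workloadID string) error {
	if err := s.release(handleID); err != nil {
		if !errors.Is(err, errNotFound) {
			return err
		}
		// "Asked to release address but it doesn't exist. Ignoring"
		fmt.Println("WARNING: no address for handle, ignoring:", handleID)
	}
	// "Releasing address using workloadID" -- legacy-keyed release, also
	// tolerated when nothing is stored under that key.
	if err := s.release(workloadID); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	s := &store{byKey: map[string]string{}}
	// Running the teardown twice is safe: both calls return nil.
	fmt.Println(releaseIP(s, "k8s-pod-network.4bb9985b", "calico-kube-controllers-5d855f4c89-2zqf4"))
	fmt.Println(releaseIP(s, "k8s-pod-network.4bb9985b", "calico-kube-controllers-5d855f4c89-2zqf4"))
}
```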
Jan 17 12:20:28.631388 systemd-networkd[1370]: cali3aca300d9cd: Gained IPv6LL Jan 17 12:20:28.867980 systemd-networkd[1370]: calid37635dd651: Link UP Jan 17 12:20:28.869949 systemd-networkd[1370]: calid37635dd651: Gained carrier Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.720 [INFO][4546] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0 calico-kube-controllers-5d855f4c89- calico-system 2cf0d785-6509-49da-ba0a-ba16afb63819 781 0 2025-01-17 12:20:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d855f4c89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal calico-kube-controllers-5d855f4c89-2zqf4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid37635dd651 [] []}} ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.722 [INFO][4546] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.790 [INFO][4563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" HandleID="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.808 [INFO][4563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" HandleID="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002937e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", "pod":"calico-kube-controllers-5d855f4c89-2zqf4", "timestamp":"2025-01-17 12:20:28.79035761 +0000 UTC"}, Hostname:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.808 [INFO][4563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.808 [INFO][4563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.809 [INFO][4563] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.811 [INFO][4563] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.819 [INFO][4563] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.826 [INFO][4563] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.830 [INFO][4563] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.834 [INFO][4563] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.834 [INFO][4563] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.836 [INFO][4563] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6 Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.844 [INFO][4563] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.856 [INFO][4563] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.133/26] block=192.168.53.128/26 handle="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.857 [INFO][4563] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.133/26] handle="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.857 [INFO][4563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
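Editor's note: in the WorkloadEndpoint dumps above and earlier, the port numbers are hex-rendered by Go's struct formatting — Port:0x35 is 53 (the "dns" and "dns-tcp" container ports) and Port:0x23c1 is 9153 (the CoreDNS "metrics" port). A trivial check:

```go
// Decode the hex port values printed in the WorkloadEndpoint struct dumps.
package main

import "fmt"

func main() {
	fmt.Println(0x35)   // 53   -> "dns" (UDP) and "dns-tcp" (TCP)
	fmt.Println(0x23c1) // 9153 -> CoreDNS "metrics" (TCP)
}
```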
Jan 17 12:20:28.896830 containerd[1463]: 2025-01-17 12:20:28.857 [INFO][4563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.133/26] IPv6=[] ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" HandleID="k8s-pod-network.c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.898039 containerd[1463]: 2025-01-17 12:20:28.862 [INFO][4546] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0", GenerateName:"calico-kube-controllers-5d855f4c89-", Namespace:"calico-system", SelfLink:"", UID:"2cf0d785-6509-49da-ba0a-ba16afb63819", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d855f4c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-5d855f4c89-2zqf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid37635dd651", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:28.898039 containerd[1463]: 2025-01-17 12:20:28.862 [INFO][4546] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.133/32] ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.898039 containerd[1463]: 2025-01-17 12:20:28.862 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid37635dd651 ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.898039 containerd[1463]: 2025-01-17 12:20:28.868 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.898039 containerd[1463]: 2025-01-17 12:20:28.870 [INFO][4546] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0", GenerateName:"calico-kube-controllers-5d855f4c89-", Namespace:"calico-system", SelfLink:"", UID:"2cf0d785-6509-49da-ba0a-ba16afb63819", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d855f4c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6", Pod:"calico-kube-controllers-5d855f4c89-2zqf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid37635dd651", MAC:"ca:7e:92:d9:08:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:28.898039 containerd[1463]: 2025-01-17 12:20:28.889 [INFO][4546] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6" Namespace="calico-system" Pod="calico-kube-controllers-5d855f4c89-2zqf4" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:28.988869 containerd[1463]: time="2025-01-17T12:20:28.987107191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:28.988869 containerd[1463]: time="2025-01-17T12:20:28.988030329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:28.988869 containerd[1463]: time="2025-01-17T12:20:28.988054525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:28.988869 containerd[1463]: time="2025-01-17T12:20:28.988241355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:29.022669 systemd[1]: Started cri-containerd-c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6.scope - libcontainer container c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6. Jan 17 12:20:29.188922 containerd[1463]: time="2025-01-17T12:20:29.188693582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d855f4c89-2zqf4,Uid:2cf0d785-6509-49da-ba0a-ba16afb63819,Namespace:calico-system,Attempt:1,} returns sandbox id \"c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6\"" Jan 17 12:20:29.272235 systemd-networkd[1370]: cali821b0cf0ab3: Gained IPv6LL Jan 17 12:20:29.321627 containerd[1463]: time="2025-01-17T12:20:29.321559607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:29.325293 containerd[1463]: time="2025-01-17T12:20:29.325213818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:20:29.328434 containerd[1463]: time="2025-01-17T12:20:29.328370988Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:29.338169 containerd[1463]: time="2025-01-17T12:20:29.337277294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:29.338169 containerd[1463]: time="2025-01-17T12:20:29.337993841Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.71003706s" Jan 17 12:20:29.338169 containerd[1463]: time="2025-01-17T12:20:29.338041394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:20:29.339538 containerd[1463]: time="2025-01-17T12:20:29.339472758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:20:29.343516 containerd[1463]: time="2025-01-17T12:20:29.343454185Z" level=info msg="CreateContainer within sandbox \"3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:20:29.392345 containerd[1463]: time="2025-01-17T12:20:29.392268680Z" level=info msg="CreateContainer within sandbox \"3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5546e68444feaf1c646cf849a32bec28f60f404505411cd1722821bb74799bd9\"" Jan 17 12:20:29.397282 containerd[1463]: time="2025-01-17T12:20:29.395331313Z" level=info msg="StartContainer for \"5546e68444feaf1c646cf849a32bec28f60f404505411cd1722821bb74799bd9\"" Jan 17 12:20:29.436731 kubelet[2673]: I0117 
12:20:29.436681 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-l2mgq" podStartSLOduration=39.436611057 podStartE2EDuration="39.436611057s" podCreationTimestamp="2025-01-17 12:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:29.39707665 +0000 UTC m=+54.636066859" watchObservedRunningTime="2025-01-17 12:20:29.436611057 +0000 UTC m=+54.675601242" Jan 17 12:20:29.494437 systemd[1]: Started cri-containerd-5546e68444feaf1c646cf849a32bec28f60f404505411cd1722821bb74799bd9.scope - libcontainer container 5546e68444feaf1c646cf849a32bec28f60f404505411cd1722821bb74799bd9. Jan 17 12:20:29.600000 containerd[1463]: time="2025-01-17T12:20:29.599882426Z" level=info msg="StartContainer for \"5546e68444feaf1c646cf849a32bec28f60f404505411cd1722821bb74799bd9\" returns successfully" Jan 17 12:20:29.939068 containerd[1463]: time="2025-01-17T12:20:29.937022099Z" level=info msg="StopPodSandbox for \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\"" Jan 17 12:20:30.035077 kubelet[2673]: I0117 12:20:30.034583 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-82zdp" podStartSLOduration=40.034513858 podStartE2EDuration="40.034513858s" podCreationTimestamp="2025-01-17 12:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:29.505372934 +0000 UTC m=+54.744363119" watchObservedRunningTime="2025-01-17 12:20:30.034513858 +0000 UTC m=+55.273504111" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.031 [INFO][4684] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.032 [INFO][4684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" iface="eth0" netns="/var/run/netns/cni-3576dfcb-c79f-9663-2886-98e700656e2f" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.032 [INFO][4684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" iface="eth0" netns="/var/run/netns/cni-3576dfcb-c79f-9663-2886-98e700656e2f" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.032 [INFO][4684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" iface="eth0" netns="/var/run/netns/cni-3576dfcb-c79f-9663-2886-98e700656e2f" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.032 [INFO][4684] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.033 [INFO][4684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.089 [INFO][4691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.090 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.090 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.107 [WARNING][4691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.107 [INFO][4691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.127 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:30.132033 containerd[1463]: 2025-01-17 12:20:30.130 [INFO][4684] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:30.136604 containerd[1463]: time="2025-01-17T12:20:30.134341249Z" level=info msg="TearDown network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\" successfully" Jan 17 12:20:30.136604 containerd[1463]: time="2025-01-17T12:20:30.134388034Z" level=info msg="StopPodSandbox for \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\" returns successfully" Jan 17 12:20:30.140021 containerd[1463]: time="2025-01-17T12:20:30.139289433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xhsbk,Uid:e6399d16-d230-4697-8af3-6eb4630e54b6,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:20:30.140564 systemd[1]: run-netns-cni\x2d3576dfcb\x2dc79f\x2d9663\x2d2886\x2d98e700656e2f.mount: Deactivated successfully. Jan 17 12:20:30.240504 systemd[1]: Started sshd@17-10.128.0.67:22-139.178.89.65:48664.service - OpenSSH per-connection server daemon (139.178.89.65:48664). 
Jan 17 12:20:30.330449 kubelet[2673]: I0117 12:20:30.328856 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d4bd59598-xbgwr" podStartSLOduration=26.616962831 podStartE2EDuration="30.328782427s" podCreationTimestamp="2025-01-17 12:20:00 +0000 UTC" firstStartedPulling="2025-01-17 12:20:25.627033045 +0000 UTC m=+50.866023219" lastFinishedPulling="2025-01-17 12:20:29.338852653 +0000 UTC m=+54.577842815" observedRunningTime="2025-01-17 12:20:30.326589234 +0000 UTC m=+55.565579419" watchObservedRunningTime="2025-01-17 12:20:30.328782427 +0000 UTC m=+55.567772612" Jan 17 12:20:30.575863 sshd[4707]: Accepted publickey for core from 139.178.89.65 port 48664 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:20:30.576937 sshd[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:30.599342 systemd-networkd[1370]: calia54e94d20a1: Link UP Jan 17 12:20:30.599900 systemd-networkd[1370]: calia54e94d20a1: Gained carrier Jan 17 12:20:30.603221 systemd-logind[1441]: New session 8 of user core. Jan 17 12:20:30.606374 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.337 [INFO][4698] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0 calico-apiserver-d4bd59598- calico-apiserver e6399d16-d230-4697-8af3-6eb4630e54b6 840 0 2025-01-17 12:20:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d4bd59598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal calico-apiserver-d4bd59598-xhsbk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia54e94d20a1 [] []}} ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.338 [INFO][4698] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.443 [INFO][4712] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" HandleID="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.474 [INFO][4712] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" HandleID="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" 
Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000357230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", "pod":"calico-apiserver-d4bd59598-xhsbk", "timestamp":"2025-01-17 12:20:30.443860016 +0000 UTC"}, Hostname:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.474 [INFO][4712] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.474 [INFO][4712] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.474 [INFO][4712] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal' Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.479 [INFO][4712] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.490 [INFO][4712] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.508 [INFO][4712] ipam/ipam.go 489: Trying affinity for 192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.513 [INFO][4712] ipam/ipam.go 155: Attempting to load block cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.521 [INFO][4712] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.53.128/26 host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.521 [INFO][4712] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.53.128/26 handle="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.529 [INFO][4712] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726 Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.543 [INFO][4712] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.53.128/26 handle="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.560 [INFO][4712] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.53.134/26] block=192.168.53.128/26 handle="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.560 [INFO][4712] ipam/ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.53.134/26] handle="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" host="ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal" Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.561 [INFO][4712] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:30.662250 containerd[1463]: 2025-01-17 12:20:30.562 [INFO][4712] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.53.134/26] IPv6=[] ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" HandleID="k8s-pod-network.c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.663594 containerd[1463]: 2025-01-17 12:20:30.573 [INFO][4698] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6399d16-d230-4697-8af3-6eb4630e54b6", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-d4bd59598-xhsbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia54e94d20a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:30.663594 containerd[1463]: 2025-01-17 12:20:30.574 [INFO][4698] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.53.134/32] ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.663594 containerd[1463]: 2025-01-17 12:20:30.575 [INFO][4698] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia54e94d20a1 ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" 
WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.663594 containerd[1463]: 2025-01-17 12:20:30.606 [INFO][4698] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.663594 containerd[1463]: 2025-01-17 12:20:30.608 [INFO][4698] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6399d16-d230-4697-8af3-6eb4630e54b6", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726", Pod:"calico-apiserver-d4bd59598-xhsbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia54e94d20a1", MAC:"66:ec:ea:98:29:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:30.663594 containerd[1463]: 2025-01-17 12:20:30.646 [INFO][4698] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726" Namespace="calico-apiserver" Pod="calico-apiserver-d4bd59598-xhsbk" WorkloadEndpoint="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:30.780562 containerd[1463]: time="2025-01-17T12:20:30.780166708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:20:30.780562 containerd[1463]: time="2025-01-17T12:20:30.780273300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:20:30.780562 containerd[1463]: time="2025-01-17T12:20:30.780294343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:30.780562 containerd[1463]: time="2025-01-17T12:20:30.780428957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:20:30.872842 systemd-networkd[1370]: calid37635dd651: Gained IPv6LL Jan 17 12:20:30.881591 systemd[1]: run-containerd-runc-k8s.io-c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726-runc.uz6ieU.mount: Deactivated successfully. Jan 17 12:20:30.896582 systemd[1]: Started cri-containerd-c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726.scope - libcontainer container c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726. Jan 17 12:20:30.958930 containerd[1463]: time="2025-01-17T12:20:30.958872535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:30.964816 containerd[1463]: time="2025-01-17T12:20:30.963204793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:20:30.966165 containerd[1463]: time="2025-01-17T12:20:30.965362570Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:30.975888 containerd[1463]: time="2025-01-17T12:20:30.975824774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:30.978921 containerd[1463]: time="2025-01-17T12:20:30.978462041Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.638935463s" Jan 17 12:20:30.978921 containerd[1463]: time="2025-01-17T12:20:30.978523934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:20:30.981045 containerd[1463]: time="2025-01-17T12:20:30.980919753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:20:30.986389 containerd[1463]: time="2025-01-17T12:20:30.986336982Z" level=info msg="CreateContainer within sandbox \"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:20:31.039158 containerd[1463]: time="2025-01-17T12:20:31.039092296Z" level=info msg="CreateContainer within sandbox \"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7e933677903911bdd7081b09ca750d7e42149225f1929ec428cf48e6ebbed6e6\"" Jan 17 12:20:31.040929 containerd[1463]: time="2025-01-17T12:20:31.040456408Z" level=info msg="StartContainer for \"7e933677903911bdd7081b09ca750d7e42149225f1929ec428cf48e6ebbed6e6\"" Jan 17 12:20:31.091272 sshd[4707]: 
pam_unix(sshd:session): session closed for user core Jan 17 12:20:31.104029 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:20:31.104047 systemd[1]: sshd@17-10.128.0.67:22-139.178.89.65:48664.service: Deactivated successfully. Jan 17 12:20:31.111794 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:20:31.119395 systemd-logind[1441]: Removed session 8. Jan 17 12:20:31.141393 systemd[1]: Started cri-containerd-7e933677903911bdd7081b09ca750d7e42149225f1929ec428cf48e6ebbed6e6.scope - libcontainer container 7e933677903911bdd7081b09ca750d7e42149225f1929ec428cf48e6ebbed6e6. Jan 17 12:20:31.256504 containerd[1463]: time="2025-01-17T12:20:31.256443135Z" level=info msg="StartContainer for \"7e933677903911bdd7081b09ca750d7e42149225f1929ec428cf48e6ebbed6e6\" returns successfully" Jan 17 12:20:31.277756 containerd[1463]: time="2025-01-17T12:20:31.277694199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d4bd59598-xhsbk,Uid:e6399d16-d230-4697-8af3-6eb4630e54b6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726\"" Jan 17 12:20:31.291515 containerd[1463]: time="2025-01-17T12:20:31.291365623Z" level=info msg="CreateContainer within sandbox \"c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:20:31.328169 containerd[1463]: time="2025-01-17T12:20:31.325812762Z" level=info msg="CreateContainer within sandbox \"c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7af89558fd14fbadf03e95cd8a5223e2ffa2b14d4237cb41d72a243a8d608292\"" Jan 17 12:20:31.330155 containerd[1463]: time="2025-01-17T12:20:31.328437117Z" level=info msg="StartContainer for \"7af89558fd14fbadf03e95cd8a5223e2ffa2b14d4237cb41d72a243a8d608292\"" Jan 17 12:20:31.334810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1247152869.mount: Deactivated successfully. Jan 17 12:20:31.425121 systemd[1]: Started cri-containerd-7af89558fd14fbadf03e95cd8a5223e2ffa2b14d4237cb41d72a243a8d608292.scope - libcontainer container 7af89558fd14fbadf03e95cd8a5223e2ffa2b14d4237cb41d72a243a8d608292. 
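The ipam.go sequence above is Calico's block-based assignment in action: the node's affinity for 192.168.53.128/26 is confirmed, the block is loaded while the host-wide IPAM lock is held, one free address (192.168.53.134/26) is claimed, and the claim is written back under a per-pod handle. The sketch below is a deliberately simplified, self-contained illustration of the "next free address in a /26 block" step; all names are invented for illustration and this is not Calico's implementation.

```go
// Simplified illustration of block-based IPAM assignment, loosely modeled on the
// ipam.go messages above. NOT Calico code; names and the allocation policy are
// assumptions for illustration only.
package main

import (
	"errors"
	"fmt"
	"net"
)

// nextFreeAddr returns the first unallocated address in the block, mirroring the
// "Attempting to assign 1 addresses from block" step in the log. Simple last-octet
// addition is enough here because a /26 never crosses an octet boundary.
func nextFreeAddr(block *net.IPNet, allocated map[string]bool) (net.IP, error) {
	ones, bits := block.Mask.Size()
	size := 1 << (bits - ones) // 64 addresses in a /26
	base := block.IP.To4()
	for ord := 0; ord < size; ord++ {
		ip := net.IPv4(base[0], base[1], base[2], base[3]+byte(ord))
		if !allocated[ip.String()] {
			return ip, nil
		}
	}
	return nil, errors.New("block is full")
}

func main() {
	_, block, err := net.ParseCIDR("192.168.53.128/26")
	if err != nil {
		panic(err)
	}
	// Pretend .128-.133 are already taken, which is what the log implies just
	// before .134 is claimed for calico-apiserver-d4bd59598-xhsbk.
	taken := map[string]bool{}
	for i := 128; i <= 133; i++ {
		taken[fmt.Sprintf("192.168.53.%d", i)] = true
	}
	ip, err := nextFreeAddr(block, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.53.134
}
```

Serialising the lookup-and-claim behind the host-wide lock, as the "Acquired/Released host-wide IPAM lock" messages show, is what keeps two concurrent CNI ADDs on the same node from racing for the same address.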
Jan 17 12:20:31.569182 containerd[1463]: time="2025-01-17T12:20:31.569016898Z" level=info msg="StartContainer for \"7af89558fd14fbadf03e95cd8a5223e2ffa2b14d4237cb41d72a243a8d608292\" returns successfully" Jan 17 12:20:32.344515 systemd-networkd[1370]: calia54e94d20a1: Gained IPv6LL Jan 17 12:20:32.353739 kubelet[2673]: I0117 12:20:32.353191 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d4bd59598-xhsbk" podStartSLOduration=32.352050283 podStartE2EDuration="32.352050283s" podCreationTimestamp="2025-01-17 12:20:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:20:32.351572657 +0000 UTC m=+57.590562843" watchObservedRunningTime="2025-01-17 12:20:32.352050283 +0000 UTC m=+57.591040463" Jan 17 12:20:33.949028 containerd[1463]: time="2025-01-17T12:20:33.948102016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:33.950639 containerd[1463]: time="2025-01-17T12:20:33.950574715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:20:33.952952 containerd[1463]: time="2025-01-17T12:20:33.952884438Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:33.957753 containerd[1463]: time="2025-01-17T12:20:33.957679211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:33.960073 containerd[1463]: time="2025-01-17T12:20:33.959333678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.977936094s" Jan 17 12:20:33.960073 containerd[1463]: time="2025-01-17T12:20:33.959387916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:20:33.962507 containerd[1463]: time="2025-01-17T12:20:33.962389122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:20:33.994110 containerd[1463]: time="2025-01-17T12:20:33.993736490Z" level=info msg="CreateContainer within sandbox \"c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:20:34.032040 containerd[1463]: time="2025-01-17T12:20:34.029114409Z" level=info msg="CreateContainer within sandbox \"c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6f5d8d1b99351f092b598b1e08c63a00955368e1b96fd6285428afd5cf1629be\"" Jan 17 12:20:34.041010 containerd[1463]: time="2025-01-17T12:20:34.040942594Z" level=info msg="StartContainer for \"6f5d8d1b99351f092b598b1e08c63a00955368e1b96fd6285428afd5cf1629be\"" Jan 17 
12:20:34.111373 systemd[1]: Started cri-containerd-6f5d8d1b99351f092b598b1e08c63a00955368e1b96fd6285428afd5cf1629be.scope - libcontainer container 6f5d8d1b99351f092b598b1e08c63a00955368e1b96fd6285428afd5cf1629be. Jan 17 12:20:34.182234 containerd[1463]: time="2025-01-17T12:20:34.182179239Z" level=info msg="StartContainer for \"6f5d8d1b99351f092b598b1e08c63a00955368e1b96fd6285428afd5cf1629be\" returns successfully" Jan 17 12:20:34.331845 kubelet[2673]: I0117 12:20:34.330824 2673 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:20:34.377170 kubelet[2673]: I0117 12:20:34.376999 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d855f4c89-2zqf4" podStartSLOduration=29.608535703 podStartE2EDuration="34.376939355s" podCreationTimestamp="2025-01-17 12:20:00 +0000 UTC" firstStartedPulling="2025-01-17 12:20:29.19295032 +0000 UTC m=+54.431940492" lastFinishedPulling="2025-01-17 12:20:33.961353964 +0000 UTC m=+59.200344144" observedRunningTime="2025-01-17 12:20:34.374614252 +0000 UTC m=+59.613604439" watchObservedRunningTime="2025-01-17 12:20:34.376939355 +0000 UTC m=+59.615929542" Jan 17 12:20:34.545855 ntpd[1428]: Listen normally on 7 vxlan.calico 192.168.53.128:123 Jan 17 12:20:34.548478 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 7 vxlan.calico 192.168.53.128:123 Jan 17 12:20:34.548478 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 8 vxlan.calico [fe80::6481:2ff:fe77:e%4]:123 Jan 17 12:20:34.548478 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 9 calibdcdd86e839 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:20:34.548478 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 10 caliea06cda453e [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:20:34.548478 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 11 cali3aca300d9cd [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:20:34.548478 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 12 cali821b0cf0ab3 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:20:34.545988 ntpd[1428]: Listen normally on 8 vxlan.calico [fe80::6481:2ff:fe77:e%4]:123 Jan 17 12:20:34.550858 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 13 calid37635dd651 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:20:34.550858 ntpd[1428]: 17 Jan 12:20:34 ntpd[1428]: Listen normally on 14 calia54e94d20a1 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:20:34.546080 ntpd[1428]: Listen normally on 9 calibdcdd86e839 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:20:34.546174 ntpd[1428]: Listen normally on 10 caliea06cda453e [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:20:34.546241 ntpd[1428]: Listen normally on 11 cali3aca300d9cd [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:20:34.548271 ntpd[1428]: Listen normally on 12 cali821b0cf0ab3 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:20:34.549301 ntpd[1428]: Listen normally on 13 calid37635dd651 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:20:34.549378 ntpd[1428]: Listen normally on 14 calia54e94d20a1 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:20:34.944959 containerd[1463]: time="2025-01-17T12:20:34.944895840Z" level=info msg="StopPodSandbox for \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\"" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.115 [WARNING][4960] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0", GenerateName:"calico-kube-controllers-5d855f4c89-", Namespace:"calico-system", SelfLink:"", UID:"2cf0d785-6509-49da-ba0a-ba16afb63819", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d855f4c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6", Pod:"calico-kube-controllers-5d855f4c89-2zqf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid37635dd651", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.115 [INFO][4960] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.115 [INFO][4960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" iface="eth0" netns="" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.116 [INFO][4960] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.116 [INFO][4960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.178 [INFO][4966] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.180 [INFO][4966] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.180 [INFO][4966] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.196 [WARNING][4966] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.196 [INFO][4966] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.198 [INFO][4966] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:35.205612 containerd[1463]: 2025-01-17 12:20:35.201 [INFO][4960] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.205612 containerd[1463]: time="2025-01-17T12:20:35.204888717Z" level=info msg="TearDown network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\" successfully" Jan 17 12:20:35.205612 containerd[1463]: time="2025-01-17T12:20:35.204925755Z" level=info msg="StopPodSandbox for \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\" returns successfully" Jan 17 12:20:35.207908 containerd[1463]: time="2025-01-17T12:20:35.207605029Z" level=info msg="RemovePodSandbox for \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\"" Jan 17 12:20:35.207908 containerd[1463]: time="2025-01-17T12:20:35.207648330Z" level=info msg="Forcibly stopping sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\"" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.374 [WARNING][4984] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0", GenerateName:"calico-kube-controllers-5d855f4c89-", Namespace:"calico-system", SelfLink:"", UID:"2cf0d785-6509-49da-ba0a-ba16afb63819", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d855f4c89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"c477e9f4fe5d7fff4cece7354d3e76e050ade6bee5473e8b578b3a0efca13fc6", Pod:"calico-kube-controllers-5d855f4c89-2zqf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.53.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid37635dd651", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.375 [INFO][4984] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.375 [INFO][4984] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" iface="eth0" netns="" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.375 [INFO][4984] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.375 [INFO][4984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.430 [INFO][4996] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.430 [INFO][4996] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.430 [INFO][4996] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.443 [WARNING][4996] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.450 [INFO][4996] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" HandleID="k8s-pod-network.4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--kube--controllers--5d855f4c89--2zqf4-eth0" Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.453 [INFO][4996] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:35.461106 containerd[1463]: 2025-01-17 12:20:35.458 [INFO][4984] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301" Jan 17 12:20:35.461106 containerd[1463]: time="2025-01-17T12:20:35.460899139Z" level=info msg="TearDown network for sandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\" successfully" Jan 17 12:20:35.493064 containerd[1463]: time="2025-01-17T12:20:35.492980444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:35.493290 containerd[1463]: time="2025-01-17T12:20:35.493107404Z" level=info msg="RemovePodSandbox \"4bb9985be36c1a03ed815fb2e07d3844681004685b9b5c3c25cbf840206b6301\" returns successfully" Jan 17 12:20:35.494363 containerd[1463]: time="2025-01-17T12:20:35.494325886Z" level=info msg="StopPodSandbox for \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\"" Jan 17 12:20:35.498114 containerd[1463]: time="2025-01-17T12:20:35.495864581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:35.499245 containerd[1463]: time="2025-01-17T12:20:35.499107278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:20:35.501613 containerd[1463]: time="2025-01-17T12:20:35.501566884Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:35.508667 containerd[1463]: time="2025-01-17T12:20:35.508593558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:20:35.510059 containerd[1463]: time="2025-01-17T12:20:35.509867535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.547430001s" Jan 17 12:20:35.510059 containerd[1463]: 
time="2025-01-17T12:20:35.509994918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:20:35.518181 containerd[1463]: time="2025-01-17T12:20:35.517242501Z" level=info msg="CreateContainer within sandbox \"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:20:35.546313 containerd[1463]: time="2025-01-17T12:20:35.546027855Z" level=info msg="CreateContainer within sandbox \"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0a612dee6e78adef2ed6e5dbe23421e4cc4a187a0e7a4ed870b182599a81d38c\"" Jan 17 12:20:35.550530 containerd[1463]: time="2025-01-17T12:20:35.549859028Z" level=info msg="StartContainer for \"0a612dee6e78adef2ed6e5dbe23421e4cc4a187a0e7a4ed870b182599a81d38c\"" Jan 17 12:20:35.641393 systemd[1]: Started cri-containerd-0a612dee6e78adef2ed6e5dbe23421e4cc4a187a0e7a4ed870b182599a81d38c.scope - libcontainer container 0a612dee6e78adef2ed6e5dbe23421e4cc4a187a0e7a4ed870b182599a81d38c. Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.606 [WARNING][5015] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6399d16-d230-4697-8af3-6eb4630e54b6", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726", Pod:"calico-apiserver-d4bd59598-xhsbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia54e94d20a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.607 [INFO][5015] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.607 [INFO][5015] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" iface="eth0" netns="" Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.607 [INFO][5015] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.607 [INFO][5015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.675 [INFO][5034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.675 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.675 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.697 [WARNING][5034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.697 [INFO][5034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.702 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:35.709495 containerd[1463]: 2025-01-17 12:20:35.707 [INFO][5015] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.709495 containerd[1463]: time="2025-01-17T12:20:35.709293321Z" level=info msg="TearDown network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\" successfully" Jan 17 12:20:35.709495 containerd[1463]: time="2025-01-17T12:20:35.709337068Z" level=info msg="StopPodSandbox for \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\" returns successfully" Jan 17 12:20:35.710637 containerd[1463]: time="2025-01-17T12:20:35.710476922Z" level=info msg="RemovePodSandbox for \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\"" Jan 17 12:20:35.710637 containerd[1463]: time="2025-01-17T12:20:35.710525572Z" level=info msg="Forcibly stopping sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\"" Jan 17 12:20:35.721027 containerd[1463]: time="2025-01-17T12:20:35.720870124Z" level=info msg="StartContainer for \"0a612dee6e78adef2ed6e5dbe23421e4cc4a187a0e7a4ed870b182599a81d38c\" returns successfully" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.777 [WARNING][5072] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"e6399d16-d230-4697-8af3-6eb4630e54b6", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"c92fb1a5e18fad3576eed5acc9854a9ecc545300dfdeeabd6336d29ac28de726", Pod:"calico-apiserver-d4bd59598-xhsbk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia54e94d20a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.777 [INFO][5072] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.778 [INFO][5072] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" iface="eth0" netns="" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.778 [INFO][5072] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.778 [INFO][5072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.817 [INFO][5081] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.817 [INFO][5081] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.817 [INFO][5081] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.825 [WARNING][5081] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.825 [INFO][5081] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" HandleID="k8s-pod-network.4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xhsbk-eth0" Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.827 [INFO][5081] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:35.830413 containerd[1463]: 2025-01-17 12:20:35.829 [INFO][5072] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862" Jan 17 12:20:35.832340 containerd[1463]: time="2025-01-17T12:20:35.830452919Z" level=info msg="TearDown network for sandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\" successfully" Jan 17 12:20:35.836034 containerd[1463]: time="2025-01-17T12:20:35.835958834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:20:35.836477 containerd[1463]: time="2025-01-17T12:20:35.836256412Z" level=info msg="RemovePodSandbox \"4bf7c28364a35c10c71b077aadc531504000de52cec917c57dee9e092d784862\" returns successfully" Jan 17 12:20:35.838051 containerd[1463]: time="2025-01-17T12:20:35.837922457Z" level=info msg="StopPodSandbox for \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\"" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.887 [WARNING][5099] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a76c710f-94e5-4498-855a-6ad309450588", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760", Pod:"coredns-76f75df574-82zdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali821b0cf0ab3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.887 [INFO][5099] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.887 [INFO][5099] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" iface="eth0" netns="" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.887 [INFO][5099] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.887 [INFO][5099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.916 [INFO][5105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.916 [INFO][5105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.916 [INFO][5105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.927 [WARNING][5105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.927 [INFO][5105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.930 [INFO][5105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:35.933972 containerd[1463]: 2025-01-17 12:20:35.931 [INFO][5099] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:35.935009 containerd[1463]: time="2025-01-17T12:20:35.934059180Z" level=info msg="TearDown network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\" successfully" Jan 17 12:20:35.935009 containerd[1463]: time="2025-01-17T12:20:35.934116064Z" level=info msg="StopPodSandbox for \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\" returns successfully" Jan 17 12:20:35.938658 containerd[1463]: time="2025-01-17T12:20:35.937909337Z" level=info msg="RemovePodSandbox for \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\"" Jan 17 12:20:35.938658 containerd[1463]: time="2025-01-17T12:20:35.937964708Z" level=info msg="Forcibly stopping sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\"" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:35.997 [WARNING][5125] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a76c710f-94e5-4498-855a-6ad309450588", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"67c2ef7bdb4d45309194e336496ee3511a857b4de377819941fe80fe08de3760", Pod:"coredns-76f75df574-82zdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali821b0cf0ab3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:35.997 [INFO][5125] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:35.997 [INFO][5125] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" iface="eth0" netns="" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:35.997 [INFO][5125] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:35.997 [INFO][5125] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:36.033 [INFO][5131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:36.033 [INFO][5131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:36.033 [INFO][5131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:36.048 [WARNING][5131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:36.048 [INFO][5131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" HandleID="k8s-pod-network.245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--82zdp-eth0" Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:36.050 [INFO][5131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:36.053292 containerd[1463]: 2025-01-17 12:20:36.052 [INFO][5125] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a" Jan 17 12:20:36.054112 containerd[1463]: time="2025-01-17T12:20:36.053325881Z" level=info msg="TearDown network for sandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\" successfully" Jan 17 12:20:36.057887 containerd[1463]: time="2025-01-17T12:20:36.057821013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:36.058065 containerd[1463]: time="2025-01-17T12:20:36.057911041Z" level=info msg="RemovePodSandbox \"245a68a6256a7effa538a4d413ca0fab617e05edb2684dfd01c2f2b251a27c2a\" returns successfully" Jan 17 12:20:36.058656 containerd[1463]: time="2025-01-17T12:20:36.058618562Z" level=info msg="StopPodSandbox for \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\"" Jan 17 12:20:36.099208 kubelet[2673]: I0117 12:20:36.098894 2673 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:20:36.099208 kubelet[2673]: I0117 12:20:36.098948 2673 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:20:36.152612 systemd[1]: Started sshd@18-10.128.0.67:22-139.178.89.65:60702.service - OpenSSH per-connection server daemon (139.178.89.65:60702). Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.132 [WARNING][5150] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7822461b-3dc7-4498-bfeb-6f9db1652d5b", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210", Pod:"coredns-76f75df574-l2mgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aca300d9cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.132 [INFO][5150] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.132 [INFO][5150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" iface="eth0" netns="" Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.132 [INFO][5150] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.132 [INFO][5150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.180 [INFO][5156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.181 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.181 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.190 [WARNING][5156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.190 [INFO][5156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.194 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:36.197043 containerd[1463]: 2025-01-17 12:20:36.195 [INFO][5150] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.197759 containerd[1463]: time="2025-01-17T12:20:36.197729769Z" level=info msg="TearDown network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\" successfully" Jan 17 12:20:36.197856 containerd[1463]: time="2025-01-17T12:20:36.197841910Z" level=info msg="StopPodSandbox for \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\" returns successfully" Jan 17 12:20:36.198714 containerd[1463]: time="2025-01-17T12:20:36.198680064Z" level=info msg="RemovePodSandbox for \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\"" Jan 17 12:20:36.198840 containerd[1463]: time="2025-01-17T12:20:36.198723433Z" level=info msg="Forcibly stopping sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\"" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.251 [WARNING][5178] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7822461b-3dc7-4498-bfeb-6f9db1652d5b", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 19, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"6ab8d321e12e506aff853dec20f679a35161d6b08a8f33c3b0798a1cfe0d9210", Pod:"coredns-76f75df574-l2mgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.53.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3aca300d9cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.252 [INFO][5178] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.252 [INFO][5178] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" iface="eth0" netns="" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.252 [INFO][5178] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.252 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.288 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.288 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.288 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.295 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.296 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" HandleID="k8s-pod-network.160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-coredns--76f75df574--l2mgq-eth0" Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.297 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:36.301759 containerd[1463]: 2025-01-17 12:20:36.299 [INFO][5178] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35" Jan 17 12:20:36.301759 containerd[1463]: time="2025-01-17T12:20:36.301652735Z" level=info msg="TearDown network for sandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\" successfully" Jan 17 12:20:36.308019 containerd[1463]: time="2025-01-17T12:20:36.307653435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:36.308019 containerd[1463]: time="2025-01-17T12:20:36.307749611Z" level=info msg="RemovePodSandbox \"160342e756d60a03b9b6ed0e45e877db684c9a9b8398c2cc4cf19e42a492ac35\" returns successfully" Jan 17 12:20:36.310324 containerd[1463]: time="2025-01-17T12:20:36.310280341Z" level=info msg="StopPodSandbox for \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\"" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.399 [WARNING][5202] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c39613df-5b01-4ed2-aed8-82b1b3948bbf", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c", Pod:"csi-node-driver-4jsqk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea06cda453e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.400 [INFO][5202] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.400 [INFO][5202] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" iface="eth0" netns="" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.400 [INFO][5202] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.400 [INFO][5202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.449 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.449 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.450 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.464 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.464 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.468 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:36.471725 containerd[1463]: 2025-01-17 12:20:36.469 [INFO][5202] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.471725 containerd[1463]: time="2025-01-17T12:20:36.471716150Z" level=info msg="TearDown network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\" successfully" Jan 17 12:20:36.474029 containerd[1463]: time="2025-01-17T12:20:36.471750340Z" level=info msg="StopPodSandbox for \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\" returns successfully" Jan 17 12:20:36.474029 containerd[1463]: time="2025-01-17T12:20:36.472418813Z" level=info msg="RemovePodSandbox for \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\"" Jan 17 12:20:36.474029 containerd[1463]: time="2025-01-17T12:20:36.472463867Z" level=info msg="Forcibly stopping sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\"" Jan 17 12:20:36.476050 sshd[5161]: Accepted publickey for core from 139.178.89.65 port 60702 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:20:36.479846 sshd[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:36.496864 systemd-logind[1441]: New session 9 of user core. Jan 17 12:20:36.500392 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.534 [WARNING][5226] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c39613df-5b01-4ed2-aed8-82b1b3948bbf", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"913d18882195706beb9d25298ccf5e7cef902c2fb6cb72213bb2492d9677af5c", Pod:"csi-node-driver-4jsqk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.53.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea06cda453e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.534 [INFO][5226] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.534 [INFO][5226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" iface="eth0" netns="" Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.534 [INFO][5226] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.534 [INFO][5226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.563 [INFO][5233] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.563 [INFO][5233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.564 [INFO][5233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.571 [WARNING][5233] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.571 [INFO][5233] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" HandleID="k8s-pod-network.a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-csi--node--driver--4jsqk-eth0" Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.573 [INFO][5233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:36.576714 containerd[1463]: 2025-01-17 12:20:36.574 [INFO][5226] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822" Jan 17 12:20:36.576714 containerd[1463]: time="2025-01-17T12:20:36.575524328Z" level=info msg="TearDown network for sandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\" successfully" Jan 17 12:20:36.581347 containerd[1463]: time="2025-01-17T12:20:36.581242569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:36.581493 containerd[1463]: time="2025-01-17T12:20:36.581357441Z" level=info msg="RemovePodSandbox \"a846abcbf293a888ee5874d5f0481d02cb0cf5d927327e6bfebde9aadd266822\" returns successfully" Jan 17 12:20:36.582879 containerd[1463]: time="2025-01-17T12:20:36.582355037Z" level=info msg="StopPodSandbox for \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\"" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.634 [WARNING][5251] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b90efd-4f58-4814-b601-0f40e2f4b17f", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da", Pod:"calico-apiserver-d4bd59598-xbgwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdcdd86e839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.635 [INFO][5251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.636 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" iface="eth0" netns="" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.636 [INFO][5251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.636 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.683 [INFO][5257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.684 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.684 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.694 [WARNING][5257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.694 [INFO][5257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.696 [INFO][5257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:36.700583 containerd[1463]: 2025-01-17 12:20:36.698 [INFO][5251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.702717 containerd[1463]: time="2025-01-17T12:20:36.701314116Z" level=info msg="TearDown network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\" successfully" Jan 17 12:20:36.702717 containerd[1463]: time="2025-01-17T12:20:36.701397814Z" level=info msg="StopPodSandbox for \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\" returns successfully" Jan 17 12:20:36.703865 containerd[1463]: time="2025-01-17T12:20:36.703401624Z" level=info msg="RemovePodSandbox for \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\"" Jan 17 12:20:36.703865 containerd[1463]: time="2025-01-17T12:20:36.703457911Z" level=info msg="Forcibly stopping sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\"" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.774 [WARNING][5283] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0", GenerateName:"calico-apiserver-d4bd59598-", Namespace:"calico-apiserver", SelfLink:"", UID:"20b90efd-4f58-4814-b601-0f40e2f4b17f", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 20, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d4bd59598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-0-4f04f9833f5f6047b7b2.c.flatcar-212911.internal", ContainerID:"3f7c3a2178de23fc3e9d7db120a2af84e24ea742c75025cb0f86a516f19b69da", Pod:"calico-apiserver-d4bd59598-xbgwr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.53.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibdcdd86e839", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.774 [INFO][5283] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.774 [INFO][5283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" iface="eth0" netns="" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.774 [INFO][5283] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.774 [INFO][5283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.805 [INFO][5290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.806 [INFO][5290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.806 [INFO][5290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.813 [WARNING][5290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.813 [INFO][5290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" HandleID="k8s-pod-network.de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Workload="ci--4081--3--0--4f04f9833f5f6047b7b2.c.flatcar--212911.internal-k8s-calico--apiserver--d4bd59598--xbgwr-eth0" Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.814 [INFO][5290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:20:36.817309 containerd[1463]: 2025-01-17 12:20:36.815 [INFO][5283] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a" Jan 17 12:20:36.818257 containerd[1463]: time="2025-01-17T12:20:36.817381216Z" level=info msg="TearDown network for sandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\" successfully" Jan 17 12:20:36.822635 containerd[1463]: time="2025-01-17T12:20:36.822577454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:20:36.822818 containerd[1463]: time="2025-01-17T12:20:36.822674121Z" level=info msg="RemovePodSandbox \"de4f8b59d9f133d69cbed27583cbe3c79d830a8e107bf734576c2f883efc234a\" returns successfully" Jan 17 12:20:36.834697 sshd[5161]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:36.840821 systemd[1]: sshd@18-10.128.0.67:22-139.178.89.65:60702.service: Deactivated successfully. Jan 17 12:20:36.843831 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:20:36.846175 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:20:36.848495 systemd-logind[1441]: Removed session 9. Jan 17 12:20:37.158250 kubelet[2673]: I0117 12:20:37.157621 2673 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-4jsqk" podStartSLOduration=27.298865142 podStartE2EDuration="37.15756332s" podCreationTimestamp="2025-01-17 12:20:00 +0000 UTC" firstStartedPulling="2025-01-17 12:20:25.652539486 +0000 UTC m=+50.891529659" lastFinishedPulling="2025-01-17 12:20:35.511237667 +0000 UTC m=+60.750227837" observedRunningTime="2025-01-17 12:20:36.397337854 +0000 UTC m=+61.636328039" watchObservedRunningTime="2025-01-17 12:20:37.15756332 +0000 UTC m=+62.396553493" Jan 17 12:20:41.892977 systemd[1]: Started sshd@19-10.128.0.67:22-139.178.89.65:53422.service - OpenSSH per-connection server daemon (139.178.89.65:53422). Jan 17 12:20:42.181971 sshd[5324]: Accepted publickey for core from 139.178.89.65 port 53422 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:20:42.183963 sshd[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:42.191957 systemd-logind[1441]: New session 10 of user core. Jan 17 12:20:42.197401 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 17 12:20:42.479781 sshd[5324]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:42.485987 systemd[1]: sshd@19-10.128.0.67:22-139.178.89.65:53422.service: Deactivated successfully. Jan 17 12:20:42.489012 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:20:42.490110 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:20:42.492051 systemd-logind[1441]: Removed session 10. Jan 17 12:20:42.538656 systemd[1]: Started sshd@20-10.128.0.67:22-139.178.89.65:53428.service - OpenSSH per-connection server daemon (139.178.89.65:53428). Jan 17 12:20:42.828893 sshd[5356]: Accepted publickey for core from 139.178.89.65 port 53428 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:20:42.830795 sshd[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:42.837390 systemd-logind[1441]: New session 11 of user core. Jan 17 12:20:42.843408 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:20:43.166422 sshd[5356]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:43.173108 systemd[1]: sshd@20-10.128.0.67:22-139.178.89.65:53428.service: Deactivated successfully. Jan 17 12:20:43.176186 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:20:43.177477 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:20:43.179455 systemd-logind[1441]: Removed session 11. Jan 17 12:20:43.219613 systemd[1]: Started sshd@21-10.128.0.67:22-139.178.89.65:53430.service - OpenSSH per-connection server daemon (139.178.89.65:53430). Jan 17 12:20:43.512905 sshd[5366]: Accepted publickey for core from 139.178.89.65 port 53430 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:20:43.515410 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:43.521332 systemd-logind[1441]: New session 12 of user core. Jan 17 12:20:43.527484 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:20:43.806263 sshd[5366]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:43.811521 systemd[1]: sshd@21-10.128.0.67:22-139.178.89.65:53430.service: Deactivated successfully. Jan 17 12:20:43.814971 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:20:43.817700 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:20:43.819530 systemd-logind[1441]: Removed session 12. Jan 17 12:20:44.929579 systemd[1]: Started sshd@22-10.128.0.67:22-51.178.141.222:56542.service - OpenSSH per-connection server daemon (51.178.141.222:56542). Jan 17 12:20:45.578353 sshd[5385]: Invalid user tg from 51.178.141.222 port 56542 Jan 17 12:20:45.694374 sshd[5385]: Received disconnect from 51.178.141.222 port 56542:11: Bye Bye [preauth] Jan 17 12:20:45.694374 sshd[5385]: Disconnected from invalid user tg 51.178.141.222 port 56542 [preauth] Jan 17 12:20:45.697453 systemd[1]: sshd@22-10.128.0.67:22-51.178.141.222:56542.service: Deactivated successfully. Jan 17 12:20:46.507616 systemd[1]: Started sshd@23-10.128.0.67:22-115.91.91.182:34622.service - OpenSSH per-connection server daemon (115.91.91.182:34622). Jan 17 12:20:47.633961 sshd[5390]: Received disconnect from 115.91.91.182 port 34622:11: Bye Bye [preauth] Jan 17 12:20:47.633961 sshd[5390]: Disconnected from authenticating user root 115.91.91.182 port 34622 [preauth] Jan 17 12:20:47.637231 systemd[1]: sshd@23-10.128.0.67:22-115.91.91.182:34622.service: Deactivated successfully. 
Jan 17 12:20:48.863058 systemd[1]: Started sshd@24-10.128.0.67:22-139.178.89.65:53442.service - OpenSSH per-connection server daemon (139.178.89.65:53442). Jan 17 12:20:49.159041 sshd[5399]: Accepted publickey for core from 139.178.89.65 port 53442 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:20:49.161059 sshd[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:49.166693 systemd-logind[1441]: New session 13 of user core. Jan 17 12:20:49.172348 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:20:49.454275 sshd[5399]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:49.459207 systemd[1]: sshd@24-10.128.0.67:22-139.178.89.65:53442.service: Deactivated successfully. Jan 17 12:20:49.462028 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:20:49.464398 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:20:49.465854 systemd-logind[1441]: Removed session 13. Jan 17 12:20:50.641593 systemd[1]: Started sshd@25-10.128.0.67:22-85.190.243.197:34954.service - OpenSSH per-connection server daemon (85.190.243.197:34954). Jan 17 12:20:54.510633 systemd[1]: Started sshd@26-10.128.0.67:22-139.178.89.65:53768.service - OpenSSH per-connection server daemon (139.178.89.65:53768). Jan 17 12:20:54.802940 sshd[5418]: Accepted publickey for core from 139.178.89.65 port 53768 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:20:54.804958 sshd[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:54.812472 systemd-logind[1441]: New session 14 of user core. Jan 17 12:20:54.817401 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:20:55.101676 sshd[5418]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:55.106879 systemd[1]: sshd@26-10.128.0.67:22-139.178.89.65:53768.service: Deactivated successfully. Jan 17 12:20:55.109718 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:20:55.112045 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:20:55.113686 systemd-logind[1441]: Removed session 14. Jan 17 12:20:57.040843 sshd[5412]: Connection closed by 85.190.243.197 port 34954 [preauth] Jan 17 12:20:57.043530 systemd[1]: sshd@25-10.128.0.67:22-85.190.243.197:34954.service: Deactivated successfully. Jan 17 12:21:00.163707 systemd[1]: Started sshd@27-10.128.0.67:22-139.178.89.65:53784.service - OpenSSH per-connection server daemon (139.178.89.65:53784). Jan 17 12:21:00.459993 sshd[5435]: Accepted publickey for core from 139.178.89.65 port 53784 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:00.462056 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:00.469608 systemd-logind[1441]: New session 15 of user core. Jan 17 12:21:00.475418 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:21:00.751671 sshd[5435]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:00.758750 systemd[1]: sshd@27-10.128.0.67:22-139.178.89.65:53784.service: Deactivated successfully. Jan 17 12:21:00.762279 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:21:00.763741 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:21:00.765456 systemd-logind[1441]: Removed session 15. 
Jan 17 12:21:05.808553 systemd[1]: Started sshd@28-10.128.0.67:22-139.178.89.65:41028.service - OpenSSH per-connection server daemon (139.178.89.65:41028). Jan 17 12:21:06.107257 sshd[5454]: Accepted publickey for core from 139.178.89.65 port 41028 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:06.110441 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:06.122705 systemd-logind[1441]: New session 16 of user core. Jan 17 12:21:06.129380 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:21:06.458436 sshd[5454]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:06.464988 systemd[1]: sshd@28-10.128.0.67:22-139.178.89.65:41028.service: Deactivated successfully. Jan 17 12:21:06.469723 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:21:06.473634 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:21:06.478116 systemd-logind[1441]: Removed session 16. Jan 17 12:21:06.518600 systemd[1]: Started sshd@29-10.128.0.67:22-139.178.89.65:41034.service - OpenSSH per-connection server daemon (139.178.89.65:41034). Jan 17 12:21:06.830923 sshd[5468]: Accepted publickey for core from 139.178.89.65 port 41034 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:06.831880 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:06.841686 systemd-logind[1441]: New session 17 of user core. Jan 17 12:21:06.846909 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:21:07.115381 systemd[1]: run-containerd-runc-k8s.io-ea56bf1dbc779c5a76ec8e889190a5b5070ff32d2acd475e7408068aef2910e1-runc.yeKkyc.mount: Deactivated successfully. Jan 17 12:21:07.359122 sshd[5468]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:07.366489 systemd[1]: sshd@29-10.128.0.67:22-139.178.89.65:41034.service: Deactivated successfully. Jan 17 12:21:07.372658 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:21:07.377205 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:21:07.379472 systemd-logind[1441]: Removed session 17. Jan 17 12:21:07.420655 systemd[1]: Started sshd@30-10.128.0.67:22-139.178.89.65:41040.service - OpenSSH per-connection server daemon (139.178.89.65:41040). Jan 17 12:21:07.732350 sshd[5500]: Accepted publickey for core from 139.178.89.65 port 41040 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:07.736034 sshd[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:07.746421 systemd-logind[1441]: New session 18 of user core. Jan 17 12:21:07.754354 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:21:10.432595 sshd[5500]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:10.442834 systemd[1]: sshd@30-10.128.0.67:22-139.178.89.65:41040.service: Deactivated successfully. Jan 17 12:21:10.448474 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:21:10.451354 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:21:10.456306 systemd-logind[1441]: Removed session 18. Jan 17 12:21:10.492562 systemd[1]: Started sshd@31-10.128.0.67:22-139.178.89.65:41046.service - OpenSSH per-connection server daemon (139.178.89.65:41046). 
Jan 17 12:21:10.794785 sshd[5539]: Accepted publickey for core from 139.178.89.65 port 41046 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:10.796802 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:10.803832 systemd-logind[1441]: New session 19 of user core. Jan 17 12:21:10.809393 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:21:11.276978 sshd[5539]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:11.283615 systemd[1]: sshd@31-10.128.0.67:22-139.178.89.65:41046.service: Deactivated successfully. Jan 17 12:21:11.290749 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:21:11.296678 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:21:11.299668 systemd-logind[1441]: Removed session 19. Jan 17 12:21:11.332636 systemd[1]: Started sshd@32-10.128.0.67:22-139.178.89.65:37846.service - OpenSSH per-connection server daemon (139.178.89.65:37846). Jan 17 12:21:11.628057 sshd[5553]: Accepted publickey for core from 139.178.89.65 port 37846 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:11.630403 sshd[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:11.639673 systemd-logind[1441]: New session 20 of user core. Jan 17 12:21:11.644632 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:21:11.926034 sshd[5553]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:11.931229 systemd[1]: sshd@32-10.128.0.67:22-139.178.89.65:37846.service: Deactivated successfully. Jan 17 12:21:11.934744 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:21:11.937786 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:21:11.940250 systemd-logind[1441]: Removed session 20. Jan 17 12:21:12.242297 systemd[1]: run-containerd-runc-k8s.io-6f5d8d1b99351f092b598b1e08c63a00955368e1b96fd6285428afd5cf1629be-runc.7faY4o.mount: Deactivated successfully. Jan 17 12:21:12.493654 systemd[1]: Started sshd@33-10.128.0.67:22-51.178.141.222:37780.service - OpenSSH per-connection server daemon (51.178.141.222:37780). Jan 17 12:21:13.134677 sshd[5584]: Invalid user teamspeak from 51.178.141.222 port 37780 Jan 17 12:21:13.245939 sshd[5584]: Received disconnect from 51.178.141.222 port 37780:11: Bye Bye [preauth] Jan 17 12:21:13.245939 sshd[5584]: Disconnected from invalid user teamspeak 51.178.141.222 port 37780 [preauth] Jan 17 12:21:13.249074 systemd[1]: sshd@33-10.128.0.67:22-51.178.141.222:37780.service: Deactivated successfully. Jan 17 12:21:16.985624 systemd[1]: Started sshd@34-10.128.0.67:22-139.178.89.65:37858.service - OpenSSH per-connection server daemon (139.178.89.65:37858). Jan 17 12:21:17.275346 sshd[5589]: Accepted publickey for core from 139.178.89.65 port 37858 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:17.277316 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:17.283064 systemd-logind[1441]: New session 21 of user core. Jan 17 12:21:17.291475 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:21:17.576731 sshd[5589]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:17.581494 systemd[1]: sshd@34-10.128.0.67:22-139.178.89.65:37858.service: Deactivated successfully. Jan 17 12:21:17.584620 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 17 12:21:17.587080 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:21:17.589008 systemd-logind[1441]: Removed session 21. Jan 17 12:21:17.957593 systemd[1]: Started sshd@35-10.128.0.67:22-115.91.91.182:44202.service - OpenSSH per-connection server daemon (115.91.91.182:44202). Jan 17 12:21:18.940963 sshd[5605]: Invalid user lucas from 115.91.91.182 port 44202 Jan 17 12:21:19.116336 sshd[5605]: Received disconnect from 115.91.91.182 port 44202:11: Bye Bye [preauth] Jan 17 12:21:19.116550 sshd[5605]: Disconnected from invalid user lucas 115.91.91.182 port 44202 [preauth] Jan 17 12:21:19.119352 systemd[1]: sshd@35-10.128.0.67:22-115.91.91.182:44202.service: Deactivated successfully. Jan 17 12:21:22.634645 systemd[1]: Started sshd@36-10.128.0.67:22-139.178.89.65:56222.service - OpenSSH per-connection server daemon (139.178.89.65:56222). Jan 17 12:21:22.924423 sshd[5612]: Accepted publickey for core from 139.178.89.65 port 56222 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:22.926426 sshd[5612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:22.933159 systemd-logind[1441]: New session 22 of user core. Jan 17 12:21:22.939361 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:21:23.219587 sshd[5612]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:23.227238 systemd[1]: sshd@36-10.128.0.67:22-139.178.89.65:56222.service: Deactivated successfully. Jan 17 12:21:23.231175 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:21:23.232416 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:21:23.234232 systemd-logind[1441]: Removed session 22. Jan 17 12:21:28.273626 systemd[1]: Started sshd@37-10.128.0.67:22-139.178.89.65:56232.service - OpenSSH per-connection server daemon (139.178.89.65:56232). Jan 17 12:21:28.567281 sshd[5624]: Accepted publickey for core from 139.178.89.65 port 56232 ssh2: RSA SHA256:S3BhiB3tnPS5YeWu+yzRqQzXy7Ocrd4fkF4b08A4xAQ Jan 17 12:21:28.569334 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:21:28.575929 systemd-logind[1441]: New session 23 of user core. Jan 17 12:21:28.582417 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:21:28.854616 sshd[5624]: pam_unix(sshd:session): session closed for user core Jan 17 12:21:28.860979 systemd[1]: sshd@37-10.128.0.67:22-139.178.89.65:56232.service: Deactivated successfully. Jan 17 12:21:28.865322 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:21:28.866610 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:21:28.868730 systemd-logind[1441]: Removed session 23. Jan 17 12:21:31.270582 systemd[1]: Started sshd@38-10.128.0.67:22-85.190.243.197:34990.service - OpenSSH per-connection server daemon (85.190.243.197:34990).