Oct 8 20:03:04.137028 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 18:24:27 -00 2024 Oct 8 20:03:04.137082 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:03:04.137102 kernel: BIOS-provided physical RAM map: Oct 8 20:03:04.137117 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Oct 8 20:03:04.137130 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Oct 8 20:03:04.137144 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Oct 8 20:03:04.137161 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Oct 8 20:03:04.137181 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Oct 8 20:03:04.137194 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Oct 8 20:03:04.137207 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Oct 8 20:03:04.137222 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Oct 8 20:03:04.137236 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Oct 8 20:03:04.137250 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Oct 8 20:03:04.137264 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Oct 8 20:03:04.137286 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Oct 8 20:03:04.137302 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Oct 8 20:03:04.137318 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Oct 8 20:03:04.137333 kernel: NX (Execute Disable) protection: active Oct 8 20:03:04.137350 kernel: APIC: Static calls initialized Oct 8 20:03:04.137365 kernel: efi: EFI v2.7 by EDK II Oct 8 20:03:04.137381 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Oct 8 20:03:04.137396 kernel: SMBIOS 2.4 present. 
Oct 8 20:03:04.137411 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024 Oct 8 20:03:04.137425 kernel: Hypervisor detected: KVM Oct 8 20:03:04.137446 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 8 20:03:04.137461 kernel: kvm-clock: using sched offset of 11896612903 cycles Oct 8 20:03:04.137477 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 8 20:03:04.137495 kernel: tsc: Detected 2299.998 MHz processor Oct 8 20:03:04.137513 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 8 20:03:04.137531 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 8 20:03:04.137549 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Oct 8 20:03:04.137567 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Oct 8 20:03:04.137617 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 8 20:03:04.137641 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Oct 8 20:03:04.137656 kernel: Using GB pages for direct mapping Oct 8 20:03:04.137672 kernel: Secure boot disabled Oct 8 20:03:04.137686 kernel: ACPI: Early table checksum verification disabled Oct 8 20:03:04.137701 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Oct 8 20:03:04.137716 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Oct 8 20:03:04.137731 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Oct 8 20:03:04.137754 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Oct 8 20:03:04.137773 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Oct 8 20:03:04.137789 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Oct 8 20:03:04.137806 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Oct 8 20:03:04.137823 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Oct 8 20:03:04.137839 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Oct 8 20:03:04.137855 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Oct 8 20:03:04.137875 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Oct 8 20:03:04.137891 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Oct 8 20:03:04.137907 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Oct 8 20:03:04.137924 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Oct 8 20:03:04.137941 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Oct 8 20:03:04.137957 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Oct 8 20:03:04.137973 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Oct 8 20:03:04.137990 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Oct 8 20:03:04.138005 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Oct 8 20:03:04.138033 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Oct 8 20:03:04.138049 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 8 20:03:04.138066 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 8 20:03:04.138083 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 8 20:03:04.138099 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Oct 8 20:03:04.138116 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Oct 8 20:03:04.138133 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Oct 8 20:03:04.138150 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Oct 8 20:03:04.138167 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Oct 8 20:03:04.138188 kernel: Zone ranges: Oct 8 20:03:04.138205 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 20:03:04.138222 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 8 20:03:04.138239 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Oct 8 20:03:04.138255 kernel: Movable zone start for each node Oct 8 20:03:04.138272 kernel: Early memory node ranges Oct 8 20:03:04.138288 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Oct 8 20:03:04.138304 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Oct 8 20:03:04.138321 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Oct 8 20:03:04.138344 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Oct 8 20:03:04.138361 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Oct 8 20:03:04.138378 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Oct 8 20:03:04.138395 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:03:04.138412 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Oct 8 20:03:04.138429 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Oct 8 20:03:04.138446 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 8 20:03:04.138463 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Oct 8 20:03:04.138480 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 8 20:03:04.138496 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 20:03:04.138518 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 20:03:04.138535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 20:03:04.138552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 20:03:04.138569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 20:03:04.138599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 20:03:04.138616 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 20:03:04.138633 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 8 20:03:04.138651 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 8 20:03:04.138673 kernel: Booting paravirtualized kernel on KVM Oct 8 20:03:04.138691 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 20:03:04.138708 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 8 20:03:04.138726 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 8 20:03:04.138743 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 8 20:03:04.138759 kernel: pcpu-alloc: [0] 0 1 Oct 8 20:03:04.138776 kernel: kvm-guest: PV spinlocks enabled Oct 8 20:03:04.138793 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 8 20:03:04.138813 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 
flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:03:04.138836 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 20:03:04.138854 kernel: random: crng init done Oct 8 20:03:04.138870 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 8 20:03:04.138888 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 20:03:04.138905 kernel: Fallback order for Node 0: 0 Oct 8 20:03:04.138922 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Oct 8 20:03:04.138936 kernel: Policy zone: Normal Oct 8 20:03:04.138952 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:03:04.138968 kernel: software IO TLB: area num 2. Oct 8 20:03:04.138991 kernel: Memory: 7513420K/7860584K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 346904K reserved, 0K cma-reserved) Oct 8 20:03:04.139008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 8 20:03:04.139034 kernel: Kernel/User page tables isolation: enabled Oct 8 20:03:04.139050 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 20:03:04.139066 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 20:03:04.139083 kernel: Dynamic Preempt: voluntary Oct 8 20:03:04.139099 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:03:04.139117 kernel: rcu: RCU event tracing is enabled. Oct 8 20:03:04.139158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 8 20:03:04.139176 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:03:04.139193 kernel: Rude variant of Tasks RCU enabled. Oct 8 20:03:04.139216 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:03:04.139234 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 20:03:04.139251 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 8 20:03:04.139268 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 8 20:03:04.139286 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 20:03:04.139304 kernel: Console: colour dummy device 80x25 Oct 8 20:03:04.139325 kernel: printk: console [ttyS0] enabled Oct 8 20:03:04.139343 kernel: ACPI: Core revision 20230628 Oct 8 20:03:04.139360 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 20:03:04.139377 kernel: x2apic enabled Oct 8 20:03:04.139393 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 20:03:04.139410 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Oct 8 20:03:04.139428 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 8 20:03:04.139445 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Oct 8 20:03:04.139468 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Oct 8 20:03:04.139486 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Oct 8 20:03:04.139504 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 20:03:04.139522 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Oct 8 20:03:04.139541 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Oct 8 20:03:04.139560 kernel: Spectre V2 : Mitigation: IBRS Oct 8 20:03:04.139590 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 20:03:04.139616 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 20:03:04.139635 kernel: RETBleed: Mitigation: IBRS Oct 8 20:03:04.139662 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 20:03:04.139681 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Oct 8 20:03:04.139702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 20:03:04.139722 kernel: MDS: Mitigation: Clear CPU buffers Oct 8 20:03:04.139742 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 8 20:03:04.139762 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 20:03:04.139781 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 20:03:04.139801 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 20:03:04.139822 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 20:03:04.139847 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 8 20:03:04.139866 kernel: Freeing SMP alternatives memory: 32K Oct 8 20:03:04.139887 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:03:04.139907 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:03:04.139928 kernel: landlock: Up and running. Oct 8 20:03:04.139948 kernel: SELinux: Initializing. Oct 8 20:03:04.139968 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.139987 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.140006 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Oct 8 20:03:04.140042 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:03:04.140060 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:03:04.140076 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:03:04.140096 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Oct 8 20:03:04.140114 kernel: signal: max sigframe size: 1776 Oct 8 20:03:04.140131 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:03:04.140149 kernel: rcu: Max phase no-delay instances is 400. Oct 8 20:03:04.140167 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 8 20:03:04.140184 kernel: smp: Bringing up secondary CPUs ... Oct 8 20:03:04.140207 kernel: smpboot: x86: Booting SMP configuration: Oct 8 20:03:04.140224 kernel: .... node #0, CPUs: #1 Oct 8 20:03:04.140244 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
Oct 8 20:03:04.140263 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 8 20:03:04.140281 kernel: smp: Brought up 1 node, 2 CPUs Oct 8 20:03:04.140298 kernel: smpboot: Max logical packages: 1 Oct 8 20:03:04.140316 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Oct 8 20:03:04.140334 kernel: devtmpfs: initialized Oct 8 20:03:04.140356 kernel: x86/mm: Memory block size: 128MB Oct 8 20:03:04.140374 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Oct 8 20:03:04.140392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:03:04.140410 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 8 20:03:04.140429 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:03:04.140450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:03:04.140472 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:03:04.140491 kernel: audit: type=2000 audit(1728417782.618:1): state=initialized audit_enabled=0 res=1 Oct 8 20:03:04.140511 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:03:04.140535 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 20:03:04.140555 kernel: cpuidle: using governor menu Oct 8 20:03:04.140572 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:03:04.140615 kernel: dca service started, version 1.12.1 Oct 8 20:03:04.140632 kernel: PCI: Using configuration type 1 for base access Oct 8 20:03:04.140648 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 8 20:03:04.140665 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 20:03:04.140681 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 20:03:04.140697 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 20:03:04.140719 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 20:03:04.140736 kernel: ACPI: Added _OSI(Module Device) Oct 8 20:03:04.140754 kernel: ACPI: Added _OSI(Processor Device) Oct 8 20:03:04.140771 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 20:03:04.140789 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 20:03:04.140807 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 8 20:03:04.140826 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 20:03:04.140841 kernel: ACPI: Interpreter enabled Oct 8 20:03:04.140858 kernel: ACPI: PM: (supports S0 S3 S5) Oct 8 20:03:04.140883 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 20:03:04.140900 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 20:03:04.140916 kernel: PCI: Ignoring E820 reservations for host bridge windows Oct 8 20:03:04.140933 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 8 20:03:04.140949 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 20:03:04.141275 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 8 20:03:04.141662 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 8 20:03:04.141888 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 8 20:03:04.141925 kernel: PCI host bridge to bus 0000:00 Oct 8 20:03:04.142169 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 8 20:03:04.142355 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 20:03:04.142527 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 20:03:04.142748 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Oct 8 20:03:04.142923 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 20:03:04.143136 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 8 20:03:04.143343 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Oct 8 20:03:04.143530 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 8 20:03:04.143729 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 8 20:03:04.143916 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Oct 8 20:03:04.144105 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Oct 8 20:03:04.144302 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Oct 8 20:03:04.144517 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 8 20:03:04.144735 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Oct 8 20:03:04.144929 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Oct 8 20:03:04.145138 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 20:03:04.145330 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Oct 8 20:03:04.145523 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Oct 8 20:03:04.145556 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 20:03:04.145606 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 20:03:04.145624 kernel: ACPI: PCI: Interrupt link LNKC 
configured for IRQ 11 Oct 8 20:03:04.145641 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 20:03:04.145657 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 8 20:03:04.145673 kernel: iommu: Default domain type: Translated Oct 8 20:03:04.145690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 20:03:04.145708 kernel: efivars: Registered efivars operations Oct 8 20:03:04.145725 kernel: PCI: Using ACPI for IRQ routing Oct 8 20:03:04.145748 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 20:03:04.145766 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Oct 8 20:03:04.145783 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Oct 8 20:03:04.145800 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Oct 8 20:03:04.145817 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Oct 8 20:03:04.145833 kernel: vgaarb: loaded Oct 8 20:03:04.145851 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 20:03:04.145869 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 20:03:04.145887 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 20:03:04.145910 kernel: pnp: PnP ACPI init Oct 8 20:03:04.145928 kernel: pnp: PnP ACPI: found 7 devices Oct 8 20:03:04.145946 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 8 20:03:04.145964 kernel: NET: Registered PF_INET protocol family Oct 8 20:03:04.145981 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 8 20:03:04.146007 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Oct 8 20:03:04.146026 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 20:03:04.146045 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 20:03:04.146063 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Oct 8 20:03:04.146086 kernel: TCP: Hash tables configured (established 65536 bind 65536) Oct 8 20:03:04.146104 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.146123 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.146140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:03:04.146158 kernel: NET: Registered PF_XDP protocol family Oct 8 20:03:04.146355 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 20:03:04.146517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 20:03:04.146694 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 20:03:04.146873 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Oct 8 20:03:04.147073 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 8 20:03:04.147096 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:03:04.147113 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 8 20:03:04.147134 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Oct 8 20:03:04.147155 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 8 20:03:04.147175 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 8 20:03:04.147196 kernel: clocksource: Switched to clocksource tsc Oct 8 20:03:04.147223 kernel: Initialise system trusted keyrings Oct 8 20:03:04.147242 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Oct 8 
20:03:04.147263 kernel: Key type asymmetric registered Oct 8 20:03:04.147284 kernel: Asymmetric key parser 'x509' registered Oct 8 20:03:04.147303 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 20:03:04.147324 kernel: io scheduler mq-deadline registered Oct 8 20:03:04.147345 kernel: io scheduler kyber registered Oct 8 20:03:04.147367 kernel: io scheduler bfq registered Oct 8 20:03:04.147389 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 20:03:04.147417 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 8 20:03:04.147673 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Oct 8 20:03:04.147703 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 8 20:03:04.147898 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Oct 8 20:03:04.147921 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 8 20:03:04.148120 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Oct 8 20:03:04.148146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:03:04.148167 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 20:03:04.148188 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Oct 8 20:03:04.148215 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Oct 8 20:03:04.148237 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Oct 8 20:03:04.148436 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Oct 8 20:03:04.148464 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 20:03:04.148483 kernel: i8042: Warning: Keylock active Oct 8 20:03:04.148502 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 20:03:04.148522 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 20:03:04.148784 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 8 20:03:04.148971 kernel: rtc_cmos 00:00: registered as rtc0 Oct 8 20:03:04.149159 kernel: rtc_cmos 00:00: setting system clock to 2024-10-08T20:03:03 UTC (1728417783) Oct 8 20:03:04.149331 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 8 20:03:04.149358 kernel: intel_pstate: CPU model not supported Oct 8 20:03:04.149379 kernel: pstore: Using crash dump compression: deflate Oct 8 20:03:04.149400 kernel: pstore: Registered efi_pstore as persistent store backend Oct 8 20:03:04.149420 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:03:04.149440 kernel: Segment Routing with IPv6 Oct 8 20:03:04.149466 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:03:04.149487 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:03:04.149507 kernel: Key type dns_resolver registered Oct 8 20:03:04.149526 kernel: IPI shorthand broadcast: enabled Oct 8 20:03:04.149546 kernel: sched_clock: Marking stable (999005252, 162291233)->(1204059247, -42762762) Oct 8 20:03:04.149565 kernel: registered taskstats version 1 Oct 8 20:03:04.149606 kernel: Loading compiled-in X.509 certificates Oct 8 20:03:04.149622 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:03:04.149636 kernel: Key type .fscrypt registered Oct 8 20:03:04.149658 kernel: Key type fscrypt-provisioning registered Oct 8 20:03:04.149674 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:03:04.149691 kernel: ima: No architecture policies found Oct 8 20:03:04.149706 kernel: clk: Disabling unused clocks Oct 8 20:03:04.149722 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Oct 8 20:03:04.149737 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 20:03:04.149754 kernel: Write protecting the kernel read-only data: 36864k Oct 8 20:03:04.149771 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 20:03:04.149794 kernel: Run /init as init process Oct 8 20:03:04.149813 kernel: with arguments: Oct 8 20:03:04.149831 kernel: /init Oct 8 20:03:04.149849 kernel: with environment: Oct 8 20:03:04.149866 kernel: HOME=/ Oct 8 20:03:04.149883 kernel: TERM=linux Oct 8 20:03:04.149901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:03:04.149922 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:03:04.149949 systemd[1]: Detected virtualization google. Oct 8 20:03:04.149969 systemd[1]: Detected architecture x86-64. Oct 8 20:03:04.149986 systemd[1]: Running in initrd. Oct 8 20:03:04.150012 systemd[1]: No hostname configured, using default hostname. Oct 8 20:03:04.150030 systemd[1]: Hostname set to <localhost>. Oct 8 20:03:04.150049 systemd[1]: Initializing machine ID from random generator. Oct 8 20:03:04.150067 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:03:04.150086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:03:04.150111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:03:04.150131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:03:04.150149 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:03:04.150168 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:03:04.150187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:03:04.150208 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:03:04.150226 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:03:04.150250 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:03:04.150270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:03:04.150312 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:03:04.150337 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:03:04.150356 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:03:04.150375 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:03:04.150398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:03:04.150418 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:03:04.150437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 20:03:04.150456 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 20:03:04.150477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:03:04.150496 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:03:04.150516 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:03:04.150535 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:03:04.150555 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:03:04.150641 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:03:04.150661 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:03:04.150681 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:03:04.150700 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:03:04.150719 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:03:04.150790 systemd-journald[183]: Collecting audit messages is disabled. Oct 8 20:03:04.150844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:04.150864 systemd-journald[183]: Journal started Oct 8 20:03:04.150904 systemd-journald[183]: Runtime Journal (/run/log/journal/6c2d21dd810b4c02902fde501c722266) is 8.0M, max 148.7M, 140.7M free. Oct 8 20:03:04.156763 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:03:04.161629 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:03:04.163801 systemd-modules-load[184]: Inserted module 'overlay' Oct 8 20:03:04.164472 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:03:04.174011 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:03:04.183533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:04.198685 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:04.202129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:03:04.215706 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:03:04.217931 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:03:04.227867 kernel: Bridge firewalling registered Oct 8 20:03:04.223302 systemd-modules-load[184]: Inserted module 'br_netfilter' Oct 8 20:03:04.228418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:03:04.233398 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:03:04.236809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:03:04.253636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:03:04.265935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:03:04.274044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:03:04.281131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:03:04.285855 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:03:04.290006 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:03:04.304231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 8 20:03:04.342490 dracut-cmdline[218]: dracut-dracut-053 Oct 8 20:03:04.345129 systemd-resolved[216]: Positive Trust Anchors: Oct 8 20:03:04.345643 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:03:04.351739 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:03:04.345710 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:03:04.352900 systemd-resolved[216]: Defaulting to hostname 'linux'. Oct 8 20:03:04.355103 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:03:04.375919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:03:04.452625 kernel: SCSI subsystem initialized Oct 8 20:03:04.463639 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:03:04.476619 kernel: iscsi: registered transport (tcp) Oct 8 20:03:04.500643 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:03:04.500764 kernel: QLogic iSCSI HBA Driver Oct 8 20:03:04.554710 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 20:03:04.564831 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:03:04.612087 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:03:04.612209 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:03:04.612238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:03:04.659666 kernel: raid6: avx2x4 gen() 22902 MB/s Oct 8 20:03:04.676675 kernel: raid6: avx2x2 gen() 20607 MB/s Oct 8 20:03:04.694095 kernel: raid6: avx2x1 gen() 20529 MB/s Oct 8 20:03:04.694207 kernel: raid6: using algorithm avx2x4 gen() 22902 MB/s Oct 8 20:03:04.712110 kernel: raid6: .... xor() 5863 MB/s, rmw enabled Oct 8 20:03:04.712230 kernel: raid6: using avx2x2 recovery algorithm Oct 8 20:03:04.735623 kernel: xor: automatically using best checksumming function avx Oct 8 20:03:04.908623 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:03:04.923491 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:03:04.930867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:03:04.966642 systemd-udevd[401]: Using default interface naming scheme 'v255'. Oct 8 20:03:04.973863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:03:04.983227 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Oct 8 20:03:05.013277 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Oct 8 20:03:05.050408 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:03:05.065827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:03:05.147295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:03:05.162392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:03:05.196288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:03:05.201822 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:03:05.207906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:03:05.222027 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:03:05.244812 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:03:05.273843 kernel: cryptd: max_cpu_qlen set to 1000 Oct 8 20:03:05.288452 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:03:05.294281 kernel: AVX2 version of gcm_enc/dec engaged. Oct 8 20:03:05.294320 kernel: AES CTR mode by8 optimization enabled Oct 8 20:03:05.313616 kernel: scsi host0: Virtio SCSI HBA Oct 8 20:03:05.378230 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:03:05.395116 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Oct 8 20:03:05.378371 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:03:05.391210 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:05.392869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:03:05.393154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:05.394770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:05.402037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:05.458871 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Oct 8 20:03:05.459414 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Oct 8 20:03:05.459654 kernel: sd 0:0:1:0: [sda] Write Protect is off Oct 8 20:03:05.461820 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Oct 8 20:03:05.462144 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 8 20:03:05.462716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:05.474361 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:03:05.474417 kernel: GPT:17805311 != 25165823 Oct 8 20:03:05.474441 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:03:05.474465 kernel: GPT:17805311 != 25165823 Oct 8 20:03:05.474486 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:03:05.474508 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:05.476615 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Oct 8 20:03:05.478812 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:05.513744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 8 20:03:05.531604 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (452) Oct 8 20:03:05.540027 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (448) Oct 8 20:03:05.559002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Oct 8 20:03:05.574959 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Oct 8 20:03:05.586904 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Oct 8 20:03:05.593400 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Oct 8 20:03:05.593647 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Oct 8 20:03:05.605961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:03:05.624436 disk-uuid[550]: Primary Header is updated. Oct 8 20:03:05.624436 disk-uuid[550]: Secondary Entries is updated. Oct 8 20:03:05.624436 disk-uuid[550]: Secondary Header is updated. Oct 8 20:03:05.638601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:05.665614 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:05.674630 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:06.676983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:06.677073 disk-uuid[551]: The operation has completed successfully. Oct 8 20:03:06.764037 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:03:06.764198 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:03:06.790968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:03:06.826483 sh[568]: Success Oct 8 20:03:06.849833 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 8 20:03:06.938711 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:03:06.947598 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:03:06.975312 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:03:07.026368 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 20:03:07.026455 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:07.026481 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:03:07.035793 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:03:07.042632 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:03:07.068637 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 20:03:07.071634 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:03:07.072555 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:03:07.079800 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Oct 8 20:03:07.135912 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:07.135968 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:07.135985 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:03:07.151622 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:03:07.151703 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:03:07.164816 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 20:03:07.186776 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:07.201032 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:03:07.216859 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:03:07.308695 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:03:07.316887 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:03:07.425279 systemd-networkd[750]: lo: Link UP Oct 8 20:03:07.425295 systemd-networkd[750]: lo: Gained carrier Oct 8 20:03:07.428093 ignition[667]: Ignition 2.19.0 Oct 8 20:03:07.428343 systemd-networkd[750]: Enumeration completed Oct 8 20:03:07.428104 ignition[667]: Stage: fetch-offline Oct 8 20:03:07.429107 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:03:07.428157 ignition[667]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.429114 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:03:07.428172 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.430608 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:03:07.428706 ignition[667]: parsed url from cmdline: "" Oct 8 20:03:07.430943 systemd-networkd[750]: eth0: Link UP Oct 8 20:03:07.428715 ignition[667]: no config URL provided Oct 8 20:03:07.430951 systemd-networkd[750]: eth0: Gained carrier Oct 8 20:03:07.428727 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:03:07.430964 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:03:07.428742 ignition[667]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:03:07.443689 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.66/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 8 20:03:07.428754 ignition[667]: failed to fetch config: resource requires networking Oct 8 20:03:07.452210 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:03:07.429076 ignition[667]: Ignition finished successfully Oct 8 20:03:07.459111 systemd[1]: Reached target network.target - Network. Oct 8 20:03:07.543382 ignition[759]: Ignition 2.19.0 Oct 8 20:03:07.492827 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 8 20:03:07.543391 ignition[759]: Stage: fetch Oct 8 20:03:07.555812 unknown[759]: fetched base config from "system" Oct 8 20:03:07.543634 ignition[759]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.555824 unknown[759]: fetched base config from "system" Oct 8 20:03:07.543648 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.555834 unknown[759]: fetched user config from "gcp" Oct 8 20:03:07.543798 ignition[759]: parsed url from cmdline: "" Oct 8 20:03:07.558917 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 20:03:07.543804 ignition[759]: no config URL provided Oct 8 20:03:07.570751 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 20:03:07.543810 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:03:07.613178 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 20:03:07.543822 ignition[759]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:03:07.644799 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:03:07.543844 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Oct 8 20:03:07.687127 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:03:07.548497 ignition[759]: GET result: OK Oct 8 20:03:07.705004 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:03:07.548653 ignition[759]: parsing config with SHA512: d2c2fd1c99feed08d3adff64e3786bf9f4984c6b8cd93115a5effacc25d0153e76ced746b7ecf56854884acaa058bf39975b5aeab15325fc7d423d05164b35b0 Oct 8 20:03:07.721783 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:03:07.556562 ignition[759]: fetch: fetch complete Oct 8 20:03:07.739787 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:03:07.556569 ignition[759]: fetch: fetch passed Oct 8 20:03:07.753790 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:03:07.556651 ignition[759]: Ignition finished successfully Oct 8 20:03:07.768776 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:03:07.597894 ignition[766]: Ignition 2.19.0 Oct 8 20:03:07.789829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:03:07.597903 ignition[766]: Stage: kargs Oct 8 20:03:07.598213 ignition[766]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.598228 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.599234 ignition[766]: kargs: kargs passed Oct 8 20:03:07.599290 ignition[766]: Ignition finished successfully Oct 8 20:03:07.673531 ignition[772]: Ignition 2.19.0 Oct 8 20:03:07.673540 ignition[772]: Stage: disks Oct 8 20:03:07.673786 ignition[772]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.673799 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.674812 ignition[772]: disks: disks passed Oct 8 20:03:07.674868 ignition[772]: Ignition finished successfully Oct 8 20:03:07.843173 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 8 20:03:08.025656 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:03:08.031778 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:03:08.168992 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. 
Oct 8 20:03:08.169939 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:03:08.170965 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:03:08.203758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:03:08.219890 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:03:08.242415 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 20:03:08.310905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Oct 8 20:03:08.310972 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:08.310998 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:08.311019 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:03:08.311040 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:03:08.311063 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:03:08.242528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:03:08.242600 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:03:08.323904 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:03:08.339748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:03:08.361878 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 20:03:08.508679 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:03:08.518820 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:03:08.529007 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:03:08.539789 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:03:08.676018 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:03:08.681878 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:03:08.721623 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:08.732914 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:03:08.744196 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:03:08.768291 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:03:08.786810 ignition[900]: INFO : Ignition 2.19.0 Oct 8 20:03:08.786810 ignition[900]: INFO : Stage: mount Oct 8 20:03:08.794999 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:08.794999 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:08.794999 ignition[900]: INFO : mount: mount passed Oct 8 20:03:08.794999 ignition[900]: INFO : Ignition finished successfully Oct 8 20:03:08.789559 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:03:08.814761 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:03:09.151796 systemd-networkd[750]: eth0: Gained IPv6LL Oct 8 20:03:09.175837 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 8 20:03:09.220609 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912) Oct 8 20:03:09.238128 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:09.238210 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:09.238252 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:03:09.259931 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:03:09.260020 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:03:09.263072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:03:09.302469 ignition[929]: INFO : Ignition 2.19.0 Oct 8 20:03:09.302469 ignition[929]: INFO : Stage: files Oct 8 20:03:09.316765 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:09.316765 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:09.316765 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:03:09.316765 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:03:09.315931 unknown[929]: wrote ssh authorized keys file for user: core Oct 8 20:03:09.417743 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:03:09.417743 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:03:09.502703 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:03:09.699637 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:03:09.716757 
ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Oct 8 20:03:09.991065 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 20:03:10.475165 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:10.475165 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:03:10.513773 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:03:10.513773 ignition[929]: INFO : files: files passed Oct 8 20:03:10.513773 ignition[929]: INFO : Ignition finished successfully Oct 8 20:03:10.479289 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:03:10.499872 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:03:10.530140 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:03:10.542303 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:03:10.721820 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:10.721820 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:10.542423 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
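The files stage recorded above pulls in an SSH key for the "core" user, downloads the Helm tarball from get.helm.sh, drops a few small files under /home/core and /etc/flatcar, fetches the kubernetes sysext image and links it to /etc/extensions/kubernetes.raw, and presets prepare-helm.service to enabled. For orientation, a config along the following lines would produce that kind of trace. This is only a sketch assembled from the paths visible in the log: the spec version, key material, file modes, and the prepare-helm.service body are assumptions, not the configuration this instance actually received.

import json

# Illustrative Ignition config (spec 3.x assumed) that would produce
# files-stage operations like the ones logged above. Key material and
# the unit body are placeholders, not values taken from the log.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {
                "name": "core",
                "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"],
            }
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                },
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                "contents": {
                    "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,  # corresponds to "setting preset to enabled" above
                "contents": (
                    "[Unit]\nDescription=Unpack helm (placeholder body)\n"
                    "[Service]\nType=oneshot\n"
                    "ExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.13.2-linux-amd64.tar.gz\n"
                    "[Install]\nWantedBy=multi-user.target\n"
                ),
            }
        ]
    },
}

print(json.dumps(config, indent=2))

Such a config is typically authored as Butane YAML, transpiled to this JSON form, and handed to the instance as user data; Ignition then applies it against /sysroot from the initrd, which is the phase shown in the surrounding lines.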
Oct 8 20:03:10.767851 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:10.628244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:03:10.654071 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:03:10.675815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:03:10.761243 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:03:10.761383 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:03:10.779148 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:03:10.802895 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:03:10.823954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:03:10.828837 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:03:10.884603 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:03:10.903811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:03:10.952234 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:03:10.966962 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:03:10.989047 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:03:11.006919 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:03:11.007115 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:03:11.035996 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:03:11.054923 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:03:11.073961 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:03:11.093917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:03:11.113014 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:03:11.133936 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:03:11.153972 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:03:11.175092 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:03:11.194996 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:03:11.216996 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:03:11.234958 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:03:11.235165 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:03:11.262028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:03:11.280954 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:03:11.301974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:03:11.302133 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:03:11.323908 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:03:11.324123 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:03:11.354057 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Oct 8 20:03:11.354331 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:03:11.378077 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:03:11.444791 ignition[981]: INFO : Ignition 2.19.0 Oct 8 20:03:11.444791 ignition[981]: INFO : Stage: umount Oct 8 20:03:11.444791 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:11.444791 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:11.444791 ignition[981]: INFO : umount: umount passed Oct 8 20:03:11.444791 ignition[981]: INFO : Ignition finished successfully Oct 8 20:03:11.378273 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:03:11.402895 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:03:11.435762 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:03:11.436117 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:03:11.461926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:03:11.462922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:03:11.463154 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:03:11.512017 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:03:11.512204 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:03:11.542695 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:03:11.543718 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:03:11.543832 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:03:11.559474 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:03:11.559641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:03:11.580925 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:03:11.581048 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:03:11.602895 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:03:11.602957 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:03:11.618859 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:03:11.618944 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:03:11.638854 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 20:03:11.638936 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 20:03:11.658833 systemd[1]: Stopped target network.target - Network. Oct 8 20:03:11.676740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:03:11.676862 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:03:11.698820 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:03:11.715771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:03:11.719750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:03:11.735775 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:03:11.751794 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:03:11.766845 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:03:11.766937 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:03:11.785847 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Oct 8 20:03:11.785931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:03:11.803840 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:03:11.803941 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:03:11.825853 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:03:11.825950 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:03:11.843854 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:03:11.843949 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:03:11.862091 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:03:11.864655 systemd-networkd[750]: eth0: DHCPv6 lease lost Oct 8 20:03:11.879959 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:03:11.899301 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:03:11.899475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:03:11.908628 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:03:11.908891 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:03:11.935751 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:03:11.935812 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:03:11.946850 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:03:11.959903 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:03:11.959987 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:03:11.985998 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:03:12.443863 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Oct 8 20:03:11.986064 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:03:12.016087 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:03:12.016190 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:03:12.039002 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:03:12.039103 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:03:12.059285 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:03:12.083121 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:03:12.083374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:03:12.109992 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:03:12.110099 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:03:12.125944 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:03:12.126017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:03:12.145886 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:03:12.145999 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:03:12.172797 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:03:12.172942 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:03:12.202971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Oct 8 20:03:12.203105 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:03:12.238861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:03:12.253770 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:03:12.253919 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:03:12.266968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:03:12.267067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:12.285516 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:03:12.285694 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:03:12.305151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:03:12.305289 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:03:12.326488 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:03:12.349850 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:03:12.396388 systemd[1]: Switching root. Oct 8 20:03:12.746777 systemd-journald[183]: Journal stopped
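Every record in this capture has the same shape: a timestamp ("Oct 8 20:03:12.746777"), a source tag ("kernel", "systemd[1]", "ignition[929]"), and the message text, with several records often wrapped onto a single physical line. The helper below is a minimal sketch for splitting a capture like this back into individual records; the regular expression and the LogRecord type are illustrative assumptions, not part of journald or any Flatcar tooling.

import re
from dataclasses import dataclass
from typing import Iterator

# One record looks like:
#   Oct 8 20:03:12.746777 systemd-journald[183]: Journal stopped
# i.e. "<Mon> <day> <HH:MM:SS.usec> <source>: <message>", and several
# records may be wrapped onto a single physical line of the capture.
_TS = r"[A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}\.\d+"
RECORD_RE = re.compile(
    rf"(?P<timestamp>{_TS}) "
    r"(?P<source>[\w.-]+(?:\[\d+\])?): "
    rf"(?P<message>.*?)(?=(?:{_TS} )|$)",
    re.DOTALL,
)

@dataclass
class LogRecord:
    timestamp: str  # e.g. "Oct 8 20:03:12.746777"
    source: str     # e.g. "kernel", "systemd[1]", "ignition[929]"
    message: str

def iter_records(capture: str) -> Iterator[LogRecord]:
    """Split a serial-console capture into individual records."""
    for m in RECORD_RE.finditer(capture):
        yield LogRecord(m["timestamp"], m["source"], m["message"].strip())

if __name__ == "__main__":
    sample = (
        "Oct 8 20:03:09.302469 ignition[929]: INFO : Ignition 2.19.0 "
        "Oct 8 20:03:09.302469 ignition[929]: INFO : Stage: files"
    )
    for rec in iter_records(sample):
        print(f"{rec.timestamp} | {rec.source} | {rec.message}")

Filtering the resulting records on an "ignition[...]" source, for instance, recovers just the provisioning trace (fetch, disks, mount, files, umount) that runs through the lines above.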
Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Oct 8 20:03:04.138116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Oct 8 20:03:04.138133 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Oct 8 20:03:04.138150 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Oct 8 20:03:04.138167 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Oct 8 20:03:04.138188 kernel: Zone ranges: Oct 8 20:03:04.138205 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 8 20:03:04.138222 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 8 20:03:04.138239 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Oct 8 20:03:04.138255 kernel: Movable zone start for each node Oct 8 20:03:04.138272 kernel: Early memory node ranges Oct 8 20:03:04.138288 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Oct 8 20:03:04.138304 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Oct 8 20:03:04.138321 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Oct 8 20:03:04.138344 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Oct 8 20:03:04.138361 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Oct 8 20:03:04.138378 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Oct 8 20:03:04.138395 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 8 20:03:04.138412 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Oct 8 20:03:04.138429 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Oct 8 20:03:04.138446 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 8 20:03:04.138463 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Oct 8 20:03:04.138480 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 8 20:03:04.138496 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 8 20:03:04.138518 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 8 20:03:04.138535 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 8 20:03:04.138552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 8 20:03:04.138569 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 8 20:03:04.138599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 8 20:03:04.138616 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 8 20:03:04.138633 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 8 20:03:04.138651 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 8 20:03:04.138673 kernel: Booting paravirtualized kernel on KVM Oct 8 20:03:04.138691 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 8 20:03:04.138708 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Oct 8 20:03:04.138726 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Oct 8 20:03:04.138743 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Oct 8 20:03:04.138759 kernel: pcpu-alloc: [0] 0 1 Oct 8 20:03:04.138776 kernel: kvm-guest: PV spinlocks enabled Oct 8 20:03:04.138793 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 8 20:03:04.138813 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro 
consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:03:04.138836 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 20:03:04.138854 kernel: random: crng init done Oct 8 20:03:04.138870 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 8 20:03:04.138888 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 20:03:04.138905 kernel: Fallback order for Node 0: 0 Oct 8 20:03:04.138922 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Oct 8 20:03:04.138936 kernel: Policy zone: Normal Oct 8 20:03:04.138952 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 20:03:04.138968 kernel: software IO TLB: area num 2. Oct 8 20:03:04.138991 kernel: Memory: 7513420K/7860584K available (12288K kernel code, 2305K rwdata, 22716K rodata, 42828K init, 2360K bss, 346904K reserved, 0K cma-reserved) Oct 8 20:03:04.139008 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 8 20:03:04.139034 kernel: Kernel/User page tables isolation: enabled Oct 8 20:03:04.139050 kernel: ftrace: allocating 37784 entries in 148 pages Oct 8 20:03:04.139066 kernel: ftrace: allocated 148 pages with 3 groups Oct 8 20:03:04.139083 kernel: Dynamic Preempt: voluntary Oct 8 20:03:04.139099 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 20:03:04.139117 kernel: rcu: RCU event tracing is enabled. Oct 8 20:03:04.139158 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 8 20:03:04.139176 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 20:03:04.139193 kernel: Rude variant of Tasks RCU enabled. Oct 8 20:03:04.139216 kernel: Tracing variant of Tasks RCU enabled. Oct 8 20:03:04.139234 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 8 20:03:04.139251 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 8 20:03:04.139268 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 8 20:03:04.139286 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 20:03:04.139304 kernel: Console: colour dummy device 80x25 Oct 8 20:03:04.139325 kernel: printk: console [ttyS0] enabled Oct 8 20:03:04.139343 kernel: ACPI: Core revision 20230628 Oct 8 20:03:04.139360 kernel: APIC: Switch to symmetric I/O mode setup Oct 8 20:03:04.139377 kernel: x2apic enabled Oct 8 20:03:04.139393 kernel: APIC: Switched APIC routing to: physical x2apic Oct 8 20:03:04.139410 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Oct 8 20:03:04.139428 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 8 20:03:04.139445 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Oct 8 20:03:04.139468 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Oct 8 20:03:04.139486 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Oct 8 20:03:04.139504 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 8 20:03:04.139522 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Oct 8 20:03:04.139541 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Oct 8 20:03:04.139560 kernel: Spectre V2 : Mitigation: IBRS Oct 8 20:03:04.139590 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 8 20:03:04.139616 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 8 20:03:04.139635 kernel: RETBleed: Mitigation: IBRS Oct 8 20:03:04.139662 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 8 20:03:04.139681 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Oct 8 20:03:04.139702 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 8 20:03:04.139722 kernel: MDS: Mitigation: Clear CPU buffers Oct 8 20:03:04.139742 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 8 20:03:04.139762 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 8 20:03:04.139781 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 8 20:03:04.139801 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 8 20:03:04.139822 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 8 20:03:04.139847 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 8 20:03:04.139866 kernel: Freeing SMP alternatives memory: 32K Oct 8 20:03:04.139887 kernel: pid_max: default: 32768 minimum: 301 Oct 8 20:03:04.139907 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 8 20:03:04.139928 kernel: landlock: Up and running. Oct 8 20:03:04.139948 kernel: SELinux: Initializing. Oct 8 20:03:04.139968 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.139987 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.140006 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Oct 8 20:03:04.140042 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:03:04.140060 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:03:04.140076 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 20:03:04.140096 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Oct 8 20:03:04.140114 kernel: signal: max sigframe size: 1776 Oct 8 20:03:04.140131 kernel: rcu: Hierarchical SRCU implementation. Oct 8 20:03:04.140149 kernel: rcu: Max phase no-delay instances is 400. Oct 8 20:03:04.140167 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 8 20:03:04.140184 kernel: smp: Bringing up secondary CPUs ... Oct 8 20:03:04.140207 kernel: smpboot: x86: Booting SMP configuration: Oct 8 20:03:04.140224 kernel: .... node #0, CPUs: #1 Oct 8 20:03:04.140244 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. 
Oct 8 20:03:04.140263 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Oct 8 20:03:04.140281 kernel: smp: Brought up 1 node, 2 CPUs Oct 8 20:03:04.140298 kernel: smpboot: Max logical packages: 1 Oct 8 20:03:04.140316 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Oct 8 20:03:04.140334 kernel: devtmpfs: initialized Oct 8 20:03:04.140356 kernel: x86/mm: Memory block size: 128MB Oct 8 20:03:04.140374 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Oct 8 20:03:04.140392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 20:03:04.140410 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 8 20:03:04.140429 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 20:03:04.140450 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 20:03:04.140472 kernel: audit: initializing netlink subsys (disabled) Oct 8 20:03:04.140491 kernel: audit: type=2000 audit(1728417782.618:1): state=initialized audit_enabled=0 res=1 Oct 8 20:03:04.140511 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 20:03:04.140535 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 8 20:03:04.140555 kernel: cpuidle: using governor menu Oct 8 20:03:04.140572 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 20:03:04.140615 kernel: dca service started, version 1.12.1 Oct 8 20:03:04.140632 kernel: PCI: Using configuration type 1 for base access Oct 8 20:03:04.140648 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 8 20:03:04.140665 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 20:03:04.140681 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 20:03:04.140697 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 20:03:04.140719 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 20:03:04.140736 kernel: ACPI: Added _OSI(Module Device) Oct 8 20:03:04.140754 kernel: ACPI: Added _OSI(Processor Device) Oct 8 20:03:04.140771 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 20:03:04.140789 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 20:03:04.140807 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 8 20:03:04.140826 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 8 20:03:04.140841 kernel: ACPI: Interpreter enabled Oct 8 20:03:04.140858 kernel: ACPI: PM: (supports S0 S3 S5) Oct 8 20:03:04.140883 kernel: ACPI: Using IOAPIC for interrupt routing Oct 8 20:03:04.140900 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 8 20:03:04.140916 kernel: PCI: Ignoring E820 reservations for host bridge windows Oct 8 20:03:04.140933 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 8 20:03:04.140949 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 20:03:04.141275 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 8 20:03:04.141662 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Oct 8 20:03:04.141888 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Oct 8 20:03:04.141925 kernel: PCI host bridge to bus 0000:00 Oct 8 20:03:04.142169 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 8 20:03:04.142355 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 8 20:03:04.142527 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 8 20:03:04.142748 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Oct 8 20:03:04.142923 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 20:03:04.143136 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 8 20:03:04.143343 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Oct 8 20:03:04.143530 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 8 20:03:04.143729 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 8 20:03:04.143916 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Oct 8 20:03:04.144105 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Oct 8 20:03:04.144302 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Oct 8 20:03:04.144517 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 8 20:03:04.144735 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Oct 8 20:03:04.144929 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Oct 8 20:03:04.145138 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Oct 8 20:03:04.145330 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Oct 8 20:03:04.145523 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Oct 8 20:03:04.145556 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 8 20:03:04.145606 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 8 20:03:04.145624 kernel: ACPI: PCI: Interrupt link LNKC 
configured for IRQ 11 Oct 8 20:03:04.145641 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 8 20:03:04.145657 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 8 20:03:04.145673 kernel: iommu: Default domain type: Translated Oct 8 20:03:04.145690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 8 20:03:04.145708 kernel: efivars: Registered efivars operations Oct 8 20:03:04.145725 kernel: PCI: Using ACPI for IRQ routing Oct 8 20:03:04.145748 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 8 20:03:04.145766 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Oct 8 20:03:04.145783 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Oct 8 20:03:04.145800 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Oct 8 20:03:04.145817 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Oct 8 20:03:04.145833 kernel: vgaarb: loaded Oct 8 20:03:04.145851 kernel: clocksource: Switched to clocksource kvm-clock Oct 8 20:03:04.145869 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 20:03:04.145887 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 20:03:04.145910 kernel: pnp: PnP ACPI init Oct 8 20:03:04.145928 kernel: pnp: PnP ACPI: found 7 devices Oct 8 20:03:04.145946 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 8 20:03:04.145964 kernel: NET: Registered PF_INET protocol family Oct 8 20:03:04.145981 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 8 20:03:04.146007 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Oct 8 20:03:04.146026 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 20:03:04.146045 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 20:03:04.146063 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Oct 8 20:03:04.146086 kernel: TCP: Hash tables configured (established 65536 bind 65536) Oct 8 20:03:04.146104 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.146123 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 8 20:03:04.146140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:03:04.146158 kernel: NET: Registered PF_XDP protocol family Oct 8 20:03:04.146355 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 8 20:03:04.146517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 8 20:03:04.146694 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 8 20:03:04.146873 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Oct 8 20:03:04.147073 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 8 20:03:04.147096 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:03:04.147113 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 8 20:03:04.147134 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Oct 8 20:03:04.147155 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 8 20:03:04.147175 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 8 20:03:04.147196 kernel: clocksource: Switched to clocksource tsc Oct 8 20:03:04.147223 kernel: Initialise system trusted keyrings Oct 8 20:03:04.147242 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Oct 8 
20:03:04.147263 kernel: Key type asymmetric registered Oct 8 20:03:04.147284 kernel: Asymmetric key parser 'x509' registered Oct 8 20:03:04.147303 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 8 20:03:04.147324 kernel: io scheduler mq-deadline registered Oct 8 20:03:04.147345 kernel: io scheduler kyber registered Oct 8 20:03:04.147367 kernel: io scheduler bfq registered Oct 8 20:03:04.147389 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 8 20:03:04.147417 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 8 20:03:04.147673 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Oct 8 20:03:04.147703 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 8 20:03:04.147898 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Oct 8 20:03:04.147921 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 8 20:03:04.148120 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Oct 8 20:03:04.148146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:03:04.148167 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 8 20:03:04.148188 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Oct 8 20:03:04.148215 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Oct 8 20:03:04.148237 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Oct 8 20:03:04.148436 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Oct 8 20:03:04.148464 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 8 20:03:04.148483 kernel: i8042: Warning: Keylock active Oct 8 20:03:04.148502 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 8 20:03:04.148522 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 8 20:03:04.148784 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 8 20:03:04.148971 kernel: rtc_cmos 00:00: registered as rtc0 Oct 8 20:03:04.149159 kernel: rtc_cmos 00:00: setting system clock to 2024-10-08T20:03:03 UTC (1728417783) Oct 8 20:03:04.149331 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 8 20:03:04.149358 kernel: intel_pstate: CPU model not supported Oct 8 20:03:04.149379 kernel: pstore: Using crash dump compression: deflate Oct 8 20:03:04.149400 kernel: pstore: Registered efi_pstore as persistent store backend Oct 8 20:03:04.149420 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:03:04.149440 kernel: Segment Routing with IPv6 Oct 8 20:03:04.149466 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:03:04.149487 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:03:04.149507 kernel: Key type dns_resolver registered Oct 8 20:03:04.149526 kernel: IPI shorthand broadcast: enabled Oct 8 20:03:04.149546 kernel: sched_clock: Marking stable (999005252, 162291233)->(1204059247, -42762762) Oct 8 20:03:04.149565 kernel: registered taskstats version 1 Oct 8 20:03:04.149606 kernel: Loading compiled-in X.509 certificates Oct 8 20:03:04.149622 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 14ce23fc5070d0471461f1dd6e298a5588e7ba8f' Oct 8 20:03:04.149636 kernel: Key type .fscrypt registered Oct 8 20:03:04.149658 kernel: Key type fscrypt-provisioning registered Oct 8 20:03:04.149674 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:03:04.149691 kernel: ima: No architecture policies found Oct 8 20:03:04.149706 kernel: clk: Disabling unused clocks Oct 8 20:03:04.149722 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Oct 8 20:03:04.149737 kernel: Freeing unused kernel image (initmem) memory: 42828K Oct 8 20:03:04.149754 kernel: Write protecting the kernel read-only data: 36864k Oct 8 20:03:04.149771 kernel: Freeing unused kernel image (rodata/data gap) memory: 1860K Oct 8 20:03:04.149794 kernel: Run /init as init process Oct 8 20:03:04.149813 kernel: with arguments: Oct 8 20:03:04.149831 kernel: /init Oct 8 20:03:04.149849 kernel: with environment: Oct 8 20:03:04.149866 kernel: HOME=/ Oct 8 20:03:04.149883 kernel: TERM=linux Oct 8 20:03:04.149901 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:03:04.149922 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:03:04.149949 systemd[1]: Detected virtualization google. Oct 8 20:03:04.149969 systemd[1]: Detected architecture x86-64. Oct 8 20:03:04.149986 systemd[1]: Running in initrd. Oct 8 20:03:04.150012 systemd[1]: No hostname configured, using default hostname. Oct 8 20:03:04.150030 systemd[1]: Hostname set to . Oct 8 20:03:04.150049 systemd[1]: Initializing machine ID from random generator. Oct 8 20:03:04.150067 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:03:04.150086 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:03:04.150111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:03:04.150131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:03:04.150149 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:03:04.150168 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:03:04.150187 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:03:04.150208 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:03:04.150226 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:03:04.150250 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:03:04.150270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:03:04.150312 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:03:04.150337 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:03:04.150356 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:03:04.150375 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:03:04.150398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:03:04.150418 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:03:04.150437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 20:03:04.150456 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 20:03:04.150477 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Oct 8 20:03:04.150496 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:03:04.150516 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:03:04.150535 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:03:04.150555 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:03:04.150641 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:03:04.150661 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:03:04.150681 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:03:04.150700 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:03:04.150719 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:03:04.150790 systemd-journald[183]: Collecting audit messages is disabled. Oct 8 20:03:04.150844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:04.150864 systemd-journald[183]: Journal started Oct 8 20:03:04.150904 systemd-journald[183]: Runtime Journal (/run/log/journal/6c2d21dd810b4c02902fde501c722266) is 8.0M, max 148.7M, 140.7M free. Oct 8 20:03:04.156763 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:03:04.161629 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:03:04.163801 systemd-modules-load[184]: Inserted module 'overlay' Oct 8 20:03:04.164472 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:03:04.174011 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:03:04.183533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:04.198685 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:04.202129 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:03:04.215706 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:03:04.217931 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:03:04.227867 kernel: Bridge firewalling registered Oct 8 20:03:04.223302 systemd-modules-load[184]: Inserted module 'br_netfilter' Oct 8 20:03:04.228418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:03:04.233398 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:03:04.236809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:03:04.253636 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:03:04.265935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:03:04.274044 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:03:04.281131 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:03:04.285855 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:03:04.290006 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:03:04.304231 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Oct 8 20:03:04.342490 dracut-cmdline[218]: dracut-dracut-053 Oct 8 20:03:04.345129 systemd-resolved[216]: Positive Trust Anchors: Oct 8 20:03:04.345643 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:03:04.351739 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=ed527eaf992abc270af9987554566193214d123941456fd3066b47855e5178a5 Oct 8 20:03:04.345710 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:03:04.352900 systemd-resolved[216]: Defaulting to hostname 'linux'. Oct 8 20:03:04.355103 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:03:04.375919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:03:04.452625 kernel: SCSI subsystem initialized Oct 8 20:03:04.463639 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:03:04.476619 kernel: iscsi: registered transport (tcp) Oct 8 20:03:04.500643 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:03:04.500764 kernel: QLogic iSCSI HBA Driver Oct 8 20:03:04.554710 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 20:03:04.564831 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:03:04.612087 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:03:04.612209 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:03:04.612238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:03:04.659666 kernel: raid6: avx2x4 gen() 22902 MB/s Oct 8 20:03:04.676675 kernel: raid6: avx2x2 gen() 20607 MB/s Oct 8 20:03:04.694095 kernel: raid6: avx2x1 gen() 20529 MB/s Oct 8 20:03:04.694207 kernel: raid6: using algorithm avx2x4 gen() 22902 MB/s Oct 8 20:03:04.712110 kernel: raid6: .... xor() 5863 MB/s, rmw enabled Oct 8 20:03:04.712230 kernel: raid6: using avx2x2 recovery algorithm Oct 8 20:03:04.735623 kernel: xor: automatically using best checksumming function avx Oct 8 20:03:04.908623 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:03:04.923491 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:03:04.930867 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:03:04.966642 systemd-udevd[401]: Using default interface naming scheme 'v255'. Oct 8 20:03:04.973863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:03:04.983227 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Oct 8 20:03:05.013277 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Oct 8 20:03:05.050408 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:03:05.065827 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:03:05.147295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:03:05.162392 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:03:05.196288 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:03:05.201822 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:03:05.207906 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:03:05.222027 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:03:05.244812 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:03:05.273843 kernel: cryptd: max_cpu_qlen set to 1000 Oct 8 20:03:05.288452 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:03:05.294281 kernel: AVX2 version of gcm_enc/dec engaged. Oct 8 20:03:05.294320 kernel: AES CTR mode by8 optimization enabled Oct 8 20:03:05.313616 kernel: scsi host0: Virtio SCSI HBA Oct 8 20:03:05.378230 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 20:03:05.395116 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Oct 8 20:03:05.378371 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:03:05.391210 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:05.392869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:03:05.393154 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:05.394770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:05.402037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:05.458871 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Oct 8 20:03:05.459414 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Oct 8 20:03:05.459654 kernel: sd 0:0:1:0: [sda] Write Protect is off Oct 8 20:03:05.461820 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Oct 8 20:03:05.462144 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 8 20:03:05.462716 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:05.474361 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:03:05.474417 kernel: GPT:17805311 != 25165823 Oct 8 20:03:05.474441 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:03:05.474465 kernel: GPT:17805311 != 25165823 Oct 8 20:03:05.474486 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 20:03:05.474508 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:05.476615 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Oct 8 20:03:05.478812 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:03:05.513744 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Oct 8 20:03:05.531604 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (452) Oct 8 20:03:05.540027 kernel: BTRFS: device fsid a8680da2-059a-4648-a8e8-f62925ab33ec devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (448) Oct 8 20:03:05.559002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Oct 8 20:03:05.574959 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Oct 8 20:03:05.586904 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Oct 8 20:03:05.593400 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Oct 8 20:03:05.593647 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Oct 8 20:03:05.605961 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:03:05.624436 disk-uuid[550]: Primary Header is updated. Oct 8 20:03:05.624436 disk-uuid[550]: Secondary Entries is updated. Oct 8 20:03:05.624436 disk-uuid[550]: Secondary Header is updated. Oct 8 20:03:05.638601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:05.665614 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:05.674630 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:06.676983 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:03:06.677073 disk-uuid[551]: The operation has completed successfully. Oct 8 20:03:06.764037 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:03:06.764198 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:03:06.790968 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 20:03:06.826483 sh[568]: Success Oct 8 20:03:06.849833 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Oct 8 20:03:06.938711 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:03:06.947598 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:03:06.975312 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:03:07.026368 kernel: BTRFS info (device dm-0): first mount of filesystem a8680da2-059a-4648-a8e8-f62925ab33ec Oct 8 20:03:07.026455 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:07.026481 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:03:07.035793 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:03:07.042632 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:03:07.068637 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 20:03:07.071634 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:03:07.072555 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:03:07.079800 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
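The GPT warnings just before this point ("GPT:17805311 != 25165823" on sda) come from the backup GPT header not sitting in the disk's last LBA, which is what you see when an image prepared for a smaller disk lands on a larger persistent disk; disk-uuid then rewrites the headers as logged above. A back-of-the-envelope check with the numbers from the log (the "disk was grown" reading is an inference, not something the log states):

# Quick arithmetic behind the "GPT:17805311 != 25165823" warning above.
# The backup GPT header must live in the disk's last LBA; values are taken
# straight from the log.
SECTOR = 512
total_sectors = 25_165_824          # sd 0:0:1:0: [sda] 25165824 512-byte logical blocks
expected_backup_lba = total_sectors - 1
found_backup_lba = 17_805_311       # where the backup header was actually found

print(f"expected backup GPT header at LBA {expected_backup_lba}")   # 25165823
print(f"found at LBA {found_backup_lba}: "
      f"~{(found_backup_lba + 1) * SECTOR / 2**30:.1f} GiB image end "
      f"vs {total_sectors * SECTOR / 2**30:.1f} GiB disk")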
Oct 8 20:03:07.135912 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:07.135968 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:07.135985 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:03:07.151622 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:03:07.151703 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:03:07.164816 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 20:03:07.186776 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:07.201032 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:03:07.216859 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:03:07.308695 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:03:07.316887 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:03:07.425279 systemd-networkd[750]: lo: Link UP Oct 8 20:03:07.425295 systemd-networkd[750]: lo: Gained carrier Oct 8 20:03:07.428093 ignition[667]: Ignition 2.19.0 Oct 8 20:03:07.428343 systemd-networkd[750]: Enumeration completed Oct 8 20:03:07.428104 ignition[667]: Stage: fetch-offline Oct 8 20:03:07.429107 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:03:07.428157 ignition[667]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.429114 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:03:07.428172 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.430608 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:03:07.428706 ignition[667]: parsed url from cmdline: "" Oct 8 20:03:07.430943 systemd-networkd[750]: eth0: Link UP Oct 8 20:03:07.428715 ignition[667]: no config URL provided Oct 8 20:03:07.430951 systemd-networkd[750]: eth0: Gained carrier Oct 8 20:03:07.428727 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:03:07.430964 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:03:07.428742 ignition[667]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:03:07.443689 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.66/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 8 20:03:07.428754 ignition[667]: failed to fetch config: resource requires networking Oct 8 20:03:07.452210 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:03:07.429076 ignition[667]: Ignition finished successfully Oct 8 20:03:07.459111 systemd[1]: Reached target network.target - Network. Oct 8 20:03:07.543382 ignition[759]: Ignition 2.19.0 Oct 8 20:03:07.492827 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 8 20:03:07.543391 ignition[759]: Stage: fetch Oct 8 20:03:07.555812 unknown[759]: fetched base config from "system" Oct 8 20:03:07.543634 ignition[759]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.555824 unknown[759]: fetched base config from "system" Oct 8 20:03:07.543648 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.555834 unknown[759]: fetched user config from "gcp" Oct 8 20:03:07.543798 ignition[759]: parsed url from cmdline: "" Oct 8 20:03:07.558917 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 20:03:07.543804 ignition[759]: no config URL provided Oct 8 20:03:07.570751 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 20:03:07.543810 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:03:07.613178 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 20:03:07.543822 ignition[759]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:03:07.644799 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 20:03:07.543844 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Oct 8 20:03:07.687127 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 20:03:07.548497 ignition[759]: GET result: OK Oct 8 20:03:07.705004 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 20:03:07.548653 ignition[759]: parsing config with SHA512: d2c2fd1c99feed08d3adff64e3786bf9f4984c6b8cd93115a5effacc25d0153e76ced746b7ecf56854884acaa058bf39975b5aeab15325fc7d423d05164b35b0 Oct 8 20:03:07.721783 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 20:03:07.556562 ignition[759]: fetch: fetch complete Oct 8 20:03:07.739787 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:03:07.556569 ignition[759]: fetch: fetch passed Oct 8 20:03:07.753790 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:03:07.556651 ignition[759]: Ignition finished successfully Oct 8 20:03:07.768776 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:03:07.597894 ignition[766]: Ignition 2.19.0 Oct 8 20:03:07.789829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 20:03:07.597903 ignition[766]: Stage: kargs Oct 8 20:03:07.598213 ignition[766]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.598228 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.599234 ignition[766]: kargs: kargs passed Oct 8 20:03:07.599290 ignition[766]: Ignition finished successfully Oct 8 20:03:07.673531 ignition[772]: Ignition 2.19.0 Oct 8 20:03:07.673540 ignition[772]: Stage: disks Oct 8 20:03:07.673786 ignition[772]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:07.673799 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:07.674812 ignition[772]: disks: disks passed Oct 8 20:03:07.674868 ignition[772]: Ignition finished successfully Oct 8 20:03:07.843173 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 8 20:03:08.025656 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 20:03:08.031778 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 20:03:08.168992 kernel: EXT4-fs (sda9): mounted filesystem 1df90f14-3ad0-4280-9b7d-a34f65d70e4d r/w with ordered data mode. Quota mode: none. 
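The fetch stage above GETs the instance user-data from the GCE metadata server at 169.254.169.254 and then logs the SHA512 of the config it parsed. A stdlib-only sketch of an equivalent request is below; it is an illustration, not Ignition's implementation, and the request only succeeds from inside a GCE instance (the Metadata-Flavor header is required by the metadata server).

# Minimal sketch of the fetch Ignition logs above: GET the user-data attribute
# and report its SHA512, the same kind of digest Ignition prints while parsing.
import hashlib
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

def fetch_user_data(url: str = URL) -> bytes:
    req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:   # fails off-GCE
        return resp.read()

if __name__ == "__main__":
    data = fetch_user_data()
    print(f"fetched {len(data)} bytes, sha512={hashlib.sha512(data).hexdigest()}")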
Oct 8 20:03:08.169939 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 20:03:08.170965 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 20:03:08.203758 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 20:03:08.219890 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 20:03:08.242415 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 20:03:08.310905 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Oct 8 20:03:08.310972 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:08.310998 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:08.311019 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:03:08.311040 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:03:08.311063 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:03:08.242528 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 20:03:08.242600 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:03:08.323904 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:03:08.339748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 20:03:08.361878 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 20:03:08.508679 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 20:03:08.518820 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Oct 8 20:03:08.529007 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 20:03:08.539789 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 20:03:08.676018 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 20:03:08.681878 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 20:03:08.721623 kernel: BTRFS info (device sda6): last unmount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:08.732914 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 20:03:08.744196 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 20:03:08.768291 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 20:03:08.786810 ignition[900]: INFO : Ignition 2.19.0 Oct 8 20:03:08.786810 ignition[900]: INFO : Stage: mount Oct 8 20:03:08.794999 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:08.794999 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:08.794999 ignition[900]: INFO : mount: mount passed Oct 8 20:03:08.794999 ignition[900]: INFO : Ignition finished successfully Oct 8 20:03:08.789559 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 20:03:08.814761 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 20:03:09.151796 systemd-networkd[750]: eth0: Gained IPv6LL Oct 8 20:03:09.175837 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 8 20:03:09.220609 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (912) Oct 8 20:03:09.238128 kernel: BTRFS info (device sda6): first mount of filesystem bfaca09e-98f3-46e8-bdd8-6fce748bf2b6 Oct 8 20:03:09.238210 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 8 20:03:09.238252 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:03:09.259931 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:03:09.260020 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:03:09.263072 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 20:03:09.302469 ignition[929]: INFO : Ignition 2.19.0 Oct 8 20:03:09.302469 ignition[929]: INFO : Stage: files Oct 8 20:03:09.316765 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:09.316765 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:09.316765 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 20:03:09.316765 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 20:03:09.316765 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 20:03:09.315931 unknown[929]: wrote ssh authorized keys file for user: core Oct 8 20:03:09.417743 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:03:09.417743 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Oct 8 20:03:09.502703 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 20:03:09.699637 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 20:03:09.716757 
ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:09.716757 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Oct 8 20:03:09.991065 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 20:03:10.475165 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Oct 8 20:03:10.475165 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 20:03:10.513773 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:03:10.513773 ignition[929]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 20:03:10.513773 ignition[929]: INFO : files: files passed Oct 8 20:03:10.513773 ignition[929]: INFO : Ignition finished successfully Oct 8 20:03:10.479289 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 20:03:10.499872 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 20:03:10.530140 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 20:03:10.542303 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 20:03:10.721820 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:10.721820 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:10.542423 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
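Among the files-stage operations logged above is the symlink that makes the downloaded Kubernetes sysext visible to systemd-sysext: /sysroot/etc/extensions/kubernetes.raw pointing at /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw. The sketch below reproduces that single effect against a scratch root so it can be run safely; the helper is illustrative and is not Ignition's own code.

# Recreate the "writing link" step from the log, relative to a chosen root
# (the real run uses /sysroot, which is still the mounted target root here).
import os

PAYLOAD = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
LINK = "etc/extensions/kubernetes.raw"

def write_sysext_link(sysroot: str) -> str:
    """Create <sysroot>/etc/extensions/kubernetes.raw -> PAYLOAD and return the link path."""
    link_path = os.path.join(sysroot, LINK)
    os.makedirs(os.path.dirname(link_path), exist_ok=True)
    if not os.path.islink(link_path):
        # The target is the absolute path as it will appear after switch-root.
        os.symlink(PAYLOAD, link_path)
    return link_path

if __name__ == "__main__":
    # Use a scratch directory instead of the real /sysroot for a safe dry run.
    print(write_sysext_link("./demo-sysroot"))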
Oct 8 20:03:10.767851 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 20:03:10.628244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:03:10.654071 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 20:03:10.675815 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 20:03:10.761243 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 20:03:10.761383 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 20:03:10.779148 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 20:03:10.802895 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 20:03:10.823954 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 20:03:10.828837 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 20:03:10.884603 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:03:10.903811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 20:03:10.952234 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:03:10.966962 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:03:10.989047 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 20:03:11.006919 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 20:03:11.007115 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 20:03:11.035996 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 20:03:11.054923 systemd[1]: Stopped target basic.target - Basic System. Oct 8 20:03:11.073961 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 20:03:11.093917 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 20:03:11.113014 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 20:03:11.133936 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 20:03:11.153972 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:03:11.175092 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 20:03:11.194996 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 20:03:11.216996 systemd[1]: Stopped target swap.target - Swaps. Oct 8 20:03:11.234958 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 20:03:11.235165 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:03:11.262028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:03:11.280954 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:03:11.301974 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 20:03:11.302133 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:03:11.323908 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 20:03:11.324123 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 20:03:11.354057 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Oct 8 20:03:11.354331 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 20:03:11.378077 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 20:03:11.444791 ignition[981]: INFO : Ignition 2.19.0 Oct 8 20:03:11.444791 ignition[981]: INFO : Stage: umount Oct 8 20:03:11.444791 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 20:03:11.444791 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 8 20:03:11.444791 ignition[981]: INFO : umount: umount passed Oct 8 20:03:11.444791 ignition[981]: INFO : Ignition finished successfully Oct 8 20:03:11.378273 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 20:03:11.402895 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 20:03:11.435762 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 20:03:11.436117 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:03:11.461926 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 20:03:11.462922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 20:03:11.463154 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:03:11.512017 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 20:03:11.512204 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:03:11.542695 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 20:03:11.543718 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 20:03:11.543832 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 20:03:11.559474 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 20:03:11.559641 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 20:03:11.580925 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 20:03:11.581048 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 20:03:11.602895 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 20:03:11.602957 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 20:03:11.618859 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 20:03:11.618944 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 20:03:11.638854 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 20:03:11.638936 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 20:03:11.658833 systemd[1]: Stopped target network.target - Network. Oct 8 20:03:11.676740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 20:03:11.676862 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:03:11.698820 systemd[1]: Stopped target paths.target - Path Units. Oct 8 20:03:11.715771 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 20:03:11.719750 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:03:11.735775 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 20:03:11.751794 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 20:03:11.766845 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 20:03:11.766937 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:03:11.785847 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Oct 8 20:03:11.785931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:03:11.803840 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 20:03:11.803941 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 20:03:11.825853 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 20:03:11.825950 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 20:03:11.843854 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 20:03:11.843949 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 20:03:11.862091 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 20:03:11.864655 systemd-networkd[750]: eth0: DHCPv6 lease lost Oct 8 20:03:11.879959 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 20:03:11.899301 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 20:03:11.899475 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 20:03:11.908628 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 20:03:11.908891 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 20:03:11.935751 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 20:03:11.935812 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:03:11.946850 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 20:03:11.959903 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 20:03:11.959987 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:03:11.985998 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 20:03:12.443863 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Oct 8 20:03:11.986064 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:03:12.016087 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 20:03:12.016190 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 20:03:12.039002 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 20:03:12.039103 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:03:12.059285 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:03:12.083121 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 20:03:12.083374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:03:12.109992 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 20:03:12.110099 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 20:03:12.125944 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 20:03:12.126017 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:03:12.145886 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 20:03:12.145999 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:03:12.172797 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 20:03:12.172942 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 20:03:12.202971 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Oct 8 20:03:12.203105 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:03:12.238861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 20:03:12.253770 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 20:03:12.253919 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:03:12.266968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:03:12.267067 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:12.285516 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 20:03:12.285694 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 20:03:12.305151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 20:03:12.305289 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 20:03:12.326488 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 20:03:12.349850 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 20:03:12.396388 systemd[1]: Switching root. Oct 8 20:03:12.746777 systemd-journald[183]: Journal stopped Oct 8 20:03:15.224930 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 20:03:15.225002 kernel: SELinux: policy capability open_perms=1 Oct 8 20:03:15.225018 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 20:03:15.225030 kernel: SELinux: policy capability always_check_network=0 Oct 8 20:03:15.225040 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 20:03:15.225052 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 20:03:15.225064 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 20:03:15.225080 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 20:03:15.225094 kernel: audit: type=1403 audit(1728417793.058:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 20:03:15.225110 systemd[1]: Successfully loaded SELinux policy in 85.544ms. Oct 8 20:03:15.225125 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.682ms. Oct 8 20:03:15.225140 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:03:15.225153 systemd[1]: Detected virtualization google. Oct 8 20:03:15.225165 systemd[1]: Detected architecture x86-64. Oct 8 20:03:15.225183 systemd[1]: Detected first boot. Oct 8 20:03:15.225197 systemd[1]: Initializing machine ID from random generator. Oct 8 20:03:15.225210 zram_generator::config[1023]: No configuration found. Oct 8 20:03:15.225225 systemd[1]: Populated /etc with preset unit settings. Oct 8 20:03:15.225239 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 20:03:15.225256 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 20:03:15.225269 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 20:03:15.225284 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 20:03:15.225297 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Oct 8 20:03:15.225310 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 20:03:15.225325 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 20:03:15.225339 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 20:03:15.225356 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 20:03:15.225370 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 20:03:15.225384 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 20:03:15.225398 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 20:03:15.225412 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:03:15.225425 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 20:03:15.225439 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 20:03:15.225456 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 8 20:03:15.225473 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:03:15.225487 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 8 20:03:15.225500 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:03:15.225513 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 20:03:15.225527 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 20:03:15.225547 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 20:03:15.225566 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 20:03:15.225593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:03:15.225607 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:03:15.225626 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:03:15.225640 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:03:15.225655 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 20:03:15.225669 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 20:03:15.225683 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:03:15.225697 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 20:03:15.225711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:03:15.225730 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 20:03:15.225744 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 20:03:15.225758 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 20:03:15.225772 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 20:03:15.225788 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:15.225806 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 20:03:15.225820 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 20:03:15.225834 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 8 20:03:15.225857 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 20:03:15.225872 systemd[1]: Reached target machines.target - Containers. Oct 8 20:03:15.225886 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 20:03:15.225900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:03:15.225914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:03:15.225933 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 20:03:15.225948 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:03:15.225962 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:03:15.225976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:03:15.225990 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 20:03:15.226004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:03:15.226018 kernel: fuse: init (API version 7.39) Oct 8 20:03:15.226031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 20:03:15.226049 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 20:03:15.226064 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 20:03:15.226077 kernel: ACPI: bus type drm_connector registered Oct 8 20:03:15.226092 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 20:03:15.226109 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 20:03:15.226123 kernel: loop: module loaded Oct 8 20:03:15.226136 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:03:15.226150 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:03:15.226165 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 20:03:15.226228 systemd-journald[1110]: Collecting audit messages is disabled. Oct 8 20:03:15.226267 systemd-journald[1110]: Journal started Oct 8 20:03:15.226299 systemd-journald[1110]: Runtime Journal (/run/log/journal/bd7b1aea08064cfbb95235a7c242e482) is 8.0M, max 148.7M, 140.7M free. Oct 8 20:03:13.990609 systemd[1]: Queued start job for default target multi-user.target. Oct 8 20:03:14.013484 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 8 20:03:14.014176 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 20:03:15.249616 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 20:03:15.273650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:03:15.291613 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 20:03:15.291735 systemd[1]: Stopped verity-setup.service. Oct 8 20:03:15.321664 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:15.332652 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 20:03:15.344444 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Oct 8 20:03:15.356325 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 20:03:15.366110 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 20:03:15.377182 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 20:03:15.387091 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 20:03:15.397451 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 20:03:15.408327 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 20:03:15.420341 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:03:15.432324 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 20:03:15.432639 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 20:03:15.444306 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:03:15.444601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:03:15.456345 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:03:15.456627 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:03:15.467351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:03:15.467650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:03:15.479377 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 20:03:15.479666 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 20:03:15.490639 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:03:15.491037 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:03:15.501407 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:03:15.512378 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 20:03:15.524357 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 20:03:15.536338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:03:15.563566 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 20:03:15.580817 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 20:03:15.602792 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 20:03:15.612928 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 20:03:15.613253 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 20:03:15.625437 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 20:03:15.641905 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 20:03:15.666007 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 20:03:15.676109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:03:15.685364 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 20:03:15.706205 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 20:03:15.719916 systemd-journald[1110]: Time spent on flushing to /var/log/journal/bd7b1aea08064cfbb95235a7c242e482 is 152.085ms for 923 entries. 
Oct 8 20:03:15.719916 systemd-journald[1110]: System Journal (/var/log/journal/bd7b1aea08064cfbb95235a7c242e482) is 8.0M, max 584.8M, 576.8M free. Oct 8 20:03:15.895524 systemd-journald[1110]: Received client request to flush runtime journal. Oct 8 20:03:15.895662 kernel: loop0: detected capacity change from 0 to 54824 Oct 8 20:03:15.717913 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:03:15.734946 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 20:03:15.745007 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:03:15.763931 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:03:15.794899 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 20:03:15.814883 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 20:03:15.831903 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 20:03:15.855080 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 20:03:15.867365 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 20:03:15.888619 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 8 20:03:15.900483 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 20:03:15.912832 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 20:03:15.924336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:03:15.957765 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 20:03:15.964098 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 20:03:15.986854 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 20:03:16.003160 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 8 20:03:16.007261 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 20:03:16.016720 kernel: loop1: detected capacity change from 0 to 210664 Oct 8 20:03:16.032937 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:03:16.050561 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 20:03:16.052277 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 20:03:16.126021 kernel: loop2: detected capacity change from 0 to 140768 Oct 8 20:03:16.123397 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Oct 8 20:03:16.123433 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Oct 8 20:03:16.150788 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 8 20:03:16.245638 kernel: loop3: detected capacity change from 0 to 142488 Oct 8 20:03:16.355649 kernel: loop4: detected capacity change from 0 to 54824 Oct 8 20:03:16.388315 kernel: loop5: detected capacity change from 0 to 210664 Oct 8 20:03:16.441194 kernel: loop6: detected capacity change from 0 to 140768 Oct 8 20:03:16.500614 kernel: loop7: detected capacity change from 0 to 142488 Oct 8 20:03:16.560657 (sd-merge)[1166]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Oct 8 20:03:16.561741 (sd-merge)[1166]: Merged extensions into '/usr'. Oct 8 20:03:16.571591 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 20:03:16.572371 systemd[1]: Reloading... Oct 8 20:03:16.746627 zram_generator::config[1188]: No configuration found. Oct 8 20:03:16.961611 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 20:03:17.059521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:03:17.167658 systemd[1]: Reloading finished in 594 ms. Oct 8 20:03:17.201952 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 20:03:17.212493 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 20:03:17.234831 systemd[1]: Starting ensure-sysext.service... Oct 8 20:03:17.256854 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 8 20:03:17.274782 systemd[1]: Reloading requested from client PID 1232 ('systemctl') (unit ensure-sysext.service)... Oct 8 20:03:17.274804 systemd[1]: Reloading... Oct 8 20:03:17.321329 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 20:03:17.322094 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 20:03:17.324180 systemd-tmpfiles[1233]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 20:03:17.324920 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Oct 8 20:03:17.325121 systemd-tmpfiles[1233]: ACLs are not supported, ignoring. Oct 8 20:03:17.331032 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:03:17.331056 systemd-tmpfiles[1233]: Skipping /boot Oct 8 20:03:17.357195 systemd-tmpfiles[1233]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 20:03:17.359828 systemd-tmpfiles[1233]: Skipping /boot Oct 8 20:03:17.412661 zram_generator::config[1256]: No configuration found. Oct 8 20:03:17.558743 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:03:17.623951 systemd[1]: Reloading finished in 348 ms. Oct 8 20:03:17.644628 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 20:03:17.661270 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 8 20:03:17.684937 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:03:17.703872 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
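The sd-merge line above lists the sysext images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce') that systemd-sysext merged into /usr and /opt, one of them being the kubernetes.raw link written by Ignition earlier. A rough sketch of the discovery half of that step follows; the directory list is an assumption for illustration and Flatcar may search additional locations.

# Collect *.raw sysext images from directories systemd-sysext commonly scans
# and print the extension names it would consider. Illustrative only.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions(dirs=SEARCH_DIRS):
    names = []
    for d in map(Path, dirs):
        if not d.is_dir():
            continue
        for image in sorted(d.glob("*.raw")):
            names.append(image.stem)        # e.g. "kubernetes" for kubernetes.raw
    return names

if __name__ == "__main__":
    print("Using extensions:", ", ".join(repr(n) for n in discover_extensions()))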
Oct 8 20:03:17.726819 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 8 20:03:17.752830 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 20:03:17.773925 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:03:17.778599 augenrules[1321]: No rules Oct 8 20:03:17.794906 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 20:03:17.810129 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:03:17.820358 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 20:03:17.837901 systemd-udevd[1320]: Using default interface naming scheme 'v255'. Oct 8 20:03:17.847739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:17.848378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:03:17.856122 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:03:17.878657 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:03:17.900072 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:03:17.909855 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:03:17.922802 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 20:03:17.944749 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 20:03:17.955717 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:17.958992 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 20:03:17.972092 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 20:03:17.984623 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 20:03:17.996488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:03:17.998658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:03:18.010532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:03:18.012850 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:03:18.024484 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:03:18.025800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:03:18.037393 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 20:03:18.060488 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 20:03:18.093605 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1349) Oct 8 20:03:18.116265 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:18.116635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:03:18.124944 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Oct 8 20:03:18.145622 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1349) Oct 8 20:03:18.161871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:03:18.184105 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:03:18.195013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:03:18.207740 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:03:18.217811 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 20:03:18.218480 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:18.222839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:03:18.223653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:03:18.242613 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 8 20:03:18.245178 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:03:18.245478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:03:18.265633 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1354) Oct 8 20:03:18.268765 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:03:18.269084 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:03:18.276477 kernel: ACPI: button: Power Button [PWRF] Oct 8 20:03:18.302983 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 8 20:03:18.312316 systemd[1]: Finished ensure-sysext.service. Oct 8 20:03:18.330285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:18.330627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 20:03:18.342877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 20:03:18.346626 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Oct 8 20:03:18.352638 kernel: ACPI: button: Sleep Button [SLPF] Oct 8 20:03:18.387660 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Oct 8 20:03:18.380879 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 20:03:18.383327 systemd-resolved[1317]: Positive Trust Anchors: Oct 8 20:03:18.383344 systemd-resolved[1317]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:03:18.383413 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 8 20:03:18.405948 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 20:03:18.414097 systemd-resolved[1317]: Defaulting to hostname 'linux'. Oct 8 20:03:18.476220 kernel: EDAC MC: Ver: 3.0.0 Oct 8 20:03:18.477030 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 20:03:18.488648 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 8 20:03:18.505956 systemd[1]: Starting setup-oem.service - Setup OEM... Oct 8 20:03:18.514914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 20:03:18.515052 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 20:03:18.525846 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 20:03:18.525905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 8 20:03:18.526810 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:03:18.537531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 20:03:18.538957 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 20:03:18.551550 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 20:03:18.551853 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 20:03:18.563347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 20:03:18.563756 systemd-networkd[1372]: lo: Link UP Oct 8 20:03:18.563770 systemd-networkd[1372]: lo: Gained carrier Oct 8 20:03:18.565537 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 20:03:18.569244 systemd-networkd[1372]: Enumeration completed Oct 8 20:03:18.571444 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:03:18.571460 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:03:18.573762 systemd-networkd[1372]: eth0: Link UP Oct 8 20:03:18.573902 systemd-networkd[1372]: eth0: Gained carrier Oct 8 20:03:18.574013 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:03:18.577031 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 8 20:03:18.584705 systemd-networkd[1372]: eth0: DHCPv4 address 10.128.0.66/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 8 20:03:18.587309 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 20:03:18.587647 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 20:03:18.612800 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 20:03:18.628239 systemd[1]: Finished setup-oem.service - Setup OEM. Oct 8 20:03:18.661258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Oct 8 20:03:18.673337 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 20:03:18.687293 systemd[1]: Reached target network.target - Network. Oct 8 20:03:18.695807 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:03:18.716993 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 20:03:18.737116 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Oct 8 20:03:18.739869 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:03:18.756937 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 20:03:18.780557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 20:03:18.791778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 20:03:18.791919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 20:03:18.799552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:03:18.801871 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 20:03:18.805740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:03:18.815032 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 20:03:18.816943 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Oct 8 20:03:18.836229 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 20:03:18.848931 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 20:03:18.874927 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 20:03:18.942354 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:03:18.955278 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 20:03:18.965071 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 20:03:18.976990 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 20:03:18.989163 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 20:03:18.999126 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 20:03:19.010893 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 20:03:19.021860 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Oct 8 20:03:19.021932 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:03:19.030891 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:03:19.040950 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 20:03:19.053042 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 20:03:19.080031 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 20:03:19.090849 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 20:03:19.101068 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:03:19.111887 systemd[1]: Reached target basic.target - Basic System. Oct 8 20:03:19.120940 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:03:19.120990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 20:03:19.126847 systemd[1]: Starting containerd.service - containerd container runtime... Oct 8 20:03:19.149874 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 8 20:03:19.167277 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 20:03:19.208832 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 20:03:19.226019 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 20:03:19.235801 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 20:03:19.244020 jq[1431]: false Oct 8 20:03:19.248848 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 20:03:19.270006 systemd[1]: Started ntpd.service - Network Time Service. Oct 8 20:03:19.273471 coreos-metadata[1429]: Oct 08 20:03:19.271 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Oct 8 20:03:19.279657 coreos-metadata[1429]: Oct 08 20:03:19.277 INFO Fetch successful Oct 8 20:03:19.279657 coreos-metadata[1429]: Oct 08 20:03:19.279 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Oct 8 20:03:19.280607 coreos-metadata[1429]: Oct 08 20:03:19.280 INFO Fetch successful Oct 8 20:03:19.286182 coreos-metadata[1429]: Oct 08 20:03:19.282 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Oct 8 20:03:19.285881 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 20:03:19.287836 coreos-metadata[1429]: Oct 08 20:03:19.287 INFO Fetch successful Oct 8 20:03:19.288171 coreos-metadata[1429]: Oct 08 20:03:19.288 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Oct 8 20:03:19.289782 coreos-metadata[1429]: Oct 08 20:03:19.289 INFO Fetch successful Oct 8 20:03:19.303922 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Oct 8 20:03:19.324625 extend-filesystems[1434]: Found loop4 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found loop5 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found loop6 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found loop7 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda1 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda2 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda3 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found usr Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda4 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda6 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda7 Oct 8 20:03:19.324625 extend-filesystems[1434]: Found sda9 Oct 8 20:03:19.324625 extend-filesystems[1434]: Checking size of /dev/sda9 Oct 8 20:03:19.517049 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Oct 8 20:03:19.517115 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:52:25 UTC 2024 (1): Starting Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: ---------------------------------------------------- Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: ntp-4 is maintained by Network Time Foundation, Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: corporation. Support and training for ntp-4 are Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: available at https://www.nwtime.org/support Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: ---------------------------------------------------- Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: proto: precision = 0.071 usec (-24) Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: basedate set to 2024-09-26 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: gps base set to 2024-09-29 (week 2334) Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Listen normally on 3 eth0 10.128.0.66:123 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Listen normally on 4 lo [::1]:123 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:42%2#123 flags 0x11 failed: Cannot assign requested address Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:42%2#123 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:42%2 Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: Listening on routing socket on fd #21 for interface updates Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 20:03:19.517210 ntpd[1436]: 8 Oct 20:03:19 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 20:03:19.326667 systemd[1]: Starting sshd-keygen.service - Generate sshd host 
keys... Oct 8 20:03:19.373232 dbus-daemon[1430]: [system] SELinux support is enabled Oct 8 20:03:19.519382 extend-filesystems[1434]: Resized partition /dev/sda9 Oct 8 20:03:19.351076 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 20:03:19.377669 dbus-daemon[1430]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1372 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 8 20:03:19.526377 extend-filesystems[1454]: resize2fs 1.47.1 (20-May-2024) Oct 8 20:03:19.526377 extend-filesystems[1454]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 8 20:03:19.526377 extend-filesystems[1454]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 8 20:03:19.526377 extend-filesystems[1454]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Oct 8 20:03:19.621833 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1354) Oct 8 20:03:19.404615 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Oct 8 20:03:19.400488 ntpd[1436]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:52:25 UTC 2024 (1): Starting Oct 8 20:03:19.627858 extend-filesystems[1434]: Resized filesystem in /dev/sda9 Oct 8 20:03:19.407695 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 20:03:19.400534 ntpd[1436]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Oct 8 20:03:19.415758 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 20:03:19.400550 ntpd[1436]: ---------------------------------------------------- Oct 8 20:03:19.629860 update_engine[1457]: I20241008 20:03:19.585625 1457 main.cc:92] Flatcar Update Engine starting Oct 8 20:03:19.629860 update_engine[1457]: I20241008 20:03:19.593852 1457 update_check_scheduler.cc:74] Next update check in 8m30s Oct 8 20:03:19.445967 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 20:03:19.400564 ntpd[1436]: ntp-4 is maintained by Network Time Foundation, Oct 8 20:03:19.630506 jq[1460]: true Oct 8 20:03:19.449337 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 20:03:19.400624 ntpd[1436]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Oct 8 20:03:19.502259 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 8 20:03:19.400641 ntpd[1436]: corporation. Support and training for ntp-4 are Oct 8 20:03:19.502664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 20:03:19.400655 ntpd[1436]: available at https://www.nwtime.org/support Oct 8 20:03:19.503278 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 20:03:19.400670 ntpd[1436]: ---------------------------------------------------- Oct 8 20:03:19.505191 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 20:03:19.402855 ntpd[1436]: proto: precision = 0.071 usec (-24) Oct 8 20:03:19.525295 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 20:03:19.403339 ntpd[1436]: basedate set to 2024-09-26 Oct 8 20:03:19.525672 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Oct 8 20:03:19.403363 ntpd[1436]: gps base set to 2024-09-29 (week 2334) Oct 8 20:03:19.540408 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Oct 8 20:03:19.406003 ntpd[1436]: Listen and drop on 0 v6wildcard [::]:123 Oct 8 20:03:19.540445 systemd-logind[1449]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 8 20:03:19.406068 ntpd[1436]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Oct 8 20:03:19.540477 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 8 20:03:19.406372 ntpd[1436]: Listen normally on 2 lo 127.0.0.1:123 Oct 8 20:03:19.540875 systemd-logind[1449]: New seat seat0. Oct 8 20:03:19.406445 ntpd[1436]: Listen normally on 3 eth0 10.128.0.66:123 Oct 8 20:03:19.542813 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 20:03:19.406519 ntpd[1436]: Listen normally on 4 lo [::1]:123 Oct 8 20:03:19.580967 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 20:03:19.406622 ntpd[1436]: bind(21) AF_INET6 fe80::4001:aff:fe80:42%2#123 flags 0x11 failed: Cannot assign requested address Oct 8 20:03:19.582708 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 20:03:19.406655 ntpd[1436]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:42%2#123 Oct 8 20:03:19.406680 ntpd[1436]: failed to init interface for address fe80::4001:aff:fe80:42%2 Oct 8 20:03:19.406732 ntpd[1436]: Listening on routing socket on fd #21 for interface updates Oct 8 20:03:19.409204 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 20:03:19.409242 ntpd[1436]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Oct 8 20:03:19.645211 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 20:03:19.683019 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 8 20:03:19.704012 jq[1467]: true Oct 8 20:03:19.744686 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 20:03:19.770357 systemd[1]: Started update-engine.service - Update Engine. Oct 8 20:03:19.786222 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:03:19.798611 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 20:03:19.798970 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 20:03:19.799218 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 20:03:19.826080 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 8 20:03:19.836829 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 20:03:19.837966 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 20:03:19.840772 systemd-networkd[1372]: eth0: Gained IPv6LL Oct 8 20:03:19.846023 tar[1466]: linux-amd64/helm Oct 8 20:03:19.862514 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 20:03:19.881665 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Oct 8 20:03:19.896875 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:03:19.922949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:19.950137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:03:19.974354 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:03:19.979434 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Oct 8 20:03:19.989177 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 20:03:20.031756 systemd[1]: Starting sshkeys.service... Oct 8 20:03:20.087236 init.sh[1504]: + '[' -e /etc/default/instance_configs.cfg.template ']' Oct 8 20:03:20.090606 init.sh[1504]: + echo -e '[InstanceSetup]\nset_host_keys = false' Oct 8 20:03:20.090606 init.sh[1504]: + /usr/bin/google_instance_setup Oct 8 20:03:20.169734 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 20:03:20.190334 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 20:03:20.223704 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:03:20.306189 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 8 20:03:20.306467 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 8 20:03:20.309063 dbus-daemon[1430]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1489 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 8 20:03:20.328897 systemd[1]: Starting polkit.service - Authorization Manager... Oct 8 20:03:20.418911 coreos-metadata[1518]: Oct 08 20:03:20.418 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Oct 8 20:03:20.434734 coreos-metadata[1518]: Oct 08 20:03:20.432 INFO Fetch failed with 404: resource not found Oct 8 20:03:20.434734 coreos-metadata[1518]: Oct 08 20:03:20.432 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Oct 8 20:03:20.434734 coreos-metadata[1518]: Oct 08 20:03:20.434 INFO Fetch successful Oct 8 20:03:20.434734 coreos-metadata[1518]: Oct 08 20:03:20.434 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Oct 8 20:03:20.440534 coreos-metadata[1518]: Oct 08 20:03:20.437 INFO Fetch failed with 404: resource not found Oct 8 20:03:20.440534 coreos-metadata[1518]: Oct 08 20:03:20.437 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Oct 8 20:03:20.440534 coreos-metadata[1518]: Oct 08 20:03:20.439 INFO Fetch failed with 404: resource not found Oct 8 20:03:20.440534 coreos-metadata[1518]: Oct 08 20:03:20.439 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Oct 8 20:03:20.442216 coreos-metadata[1518]: Oct 08 20:03:20.441 INFO Fetch successful Oct 8 20:03:20.447972 unknown[1518]: wrote ssh authorized keys file for user: core Oct 8 20:03:20.508217 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:03:20.523125 polkitd[1527]: Started polkitd version 121 Oct 8 20:03:20.528039 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:03:20.527712 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Oct 8 20:03:20.551202 systemd[1]: Finished sshkeys.service. Oct 8 20:03:20.558061 polkitd[1527]: Loading rules from directory /etc/polkit-1/rules.d Oct 8 20:03:20.558205 polkitd[1527]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 8 20:03:20.568984 polkitd[1527]: Finished loading, compiling and executing 2 rules Oct 8 20:03:20.575310 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 8 20:03:20.575661 systemd[1]: Started polkit.service - Authorization Manager. Oct 8 20:03:20.578702 polkitd[1527]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 8 20:03:20.591655 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:03:20.645869 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:03:20.654280 systemd-hostnamed[1489]: Hostname set to (transient) Oct 8 20:03:20.665128 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:03:20.666375 systemd-resolved[1317]: System hostname changed to 'ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal'. Oct 8 20:03:20.687018 systemd[1]: Started sshd@0-10.128.0.66:22-139.178.68.195:55578.service - OpenSSH per-connection server daemon (139.178.68.195:55578). Oct 8 20:03:20.757085 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:03:20.758019 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:03:20.778656 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:03:20.858102 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 20:03:20.882744 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:03:20.901226 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 20:03:20.908458 containerd[1468]: time="2024-10-08T20:03:20.906531440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 8 20:03:20.912092 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:03:21.022693 containerd[1468]: time="2024-10-08T20:03:21.022487959Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:21.028930 containerd[1468]: time="2024-10-08T20:03:21.028250126Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:21.028930 containerd[1468]: time="2024-10-08T20:03:21.028337302Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:03:21.028930 containerd[1468]: time="2024-10-08T20:03:21.028367895Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:03:21.030046 containerd[1468]: time="2024-10-08T20:03:21.029412347Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:03:21.030046 containerd[1468]: time="2024-10-08T20:03:21.029463567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:21.030046 containerd[1468]: time="2024-10-08T20:03:21.029566552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:21.030046 containerd[1468]: time="2024-10-08T20:03:21.029633672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:21.030046 containerd[1468]: time="2024-10-08T20:03:21.029983360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:21.030046 containerd[1468]: time="2024-10-08T20:03:21.030015180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:21.030046 containerd[1468]: time="2024-10-08T20:03:21.030038886Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:21.031081 containerd[1468]: time="2024-10-08T20:03:21.030056283Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:21.031081 containerd[1468]: time="2024-10-08T20:03:21.030177960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:21.031081 containerd[1468]: time="2024-10-08T20:03:21.030552013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:03:21.031243 containerd[1468]: time="2024-10-08T20:03:21.031120291Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:03:21.031243 containerd[1468]: time="2024-10-08T20:03:21.031152441Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:03:21.031337 containerd[1468]: time="2024-10-08T20:03:21.031288089Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 20:03:21.032613 containerd[1468]: time="2024-10-08T20:03:21.031387465Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:03:21.043278 containerd[1468]: time="2024-10-08T20:03:21.043212730Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:03:21.043692 containerd[1468]: time="2024-10-08T20:03:21.043346206Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:03:21.043692 containerd[1468]: time="2024-10-08T20:03:21.043441966Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 20:03:21.043692 containerd[1468]: time="2024-10-08T20:03:21.043470930Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:03:21.043692 containerd[1468]: time="2024-10-08T20:03:21.043498853Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:03:21.045112 containerd[1468]: time="2024-10-08T20:03:21.043786471Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045372032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045571451Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045619490Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045645377Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045671176Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045695977Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045720301Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045747310Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045777298Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045802350Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045826319Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045846925Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045881248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.046829 containerd[1468]: time="2024-10-08T20:03:21.045908174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.045931092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.045960806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.045984275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046009992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046031900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046056280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046081968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046113939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046133867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046155509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046178533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046206281Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046245464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046267868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.047591 containerd[1468]: time="2024-10-08T20:03:21.046289757Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:03:21.048213 containerd[1468]: time="2024-10-08T20:03:21.048033677Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:03:21.048213 containerd[1468]: time="2024-10-08T20:03:21.048192917Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:03:21.048303 containerd[1468]: time="2024-10-08T20:03:21.048217368Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:03:21.048303 containerd[1468]: time="2024-10-08T20:03:21.048242392Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:03:21.048303 containerd[1468]: time="2024-10-08T20:03:21.048262666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:03:21.048303 containerd[1468]: time="2024-10-08T20:03:21.048289938Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:03:21.048469 containerd[1468]: time="2024-10-08T20:03:21.048308877Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:03:21.048469 containerd[1468]: time="2024-10-08T20:03:21.048337642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 20:03:21.050573 containerd[1468]: time="2024-10-08T20:03:21.048847021Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:03:21.050573 containerd[1468]: time="2024-10-08T20:03:21.048951489Z" level=info msg="Connect containerd service" Oct 8 20:03:21.050573 containerd[1468]: time="2024-10-08T20:03:21.049021031Z" level=info msg="using legacy CRI server" Oct 8 20:03:21.050573 containerd[1468]: time="2024-10-08T20:03:21.049034935Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:03:21.050573 containerd[1468]: time="2024-10-08T20:03:21.049224520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:03:21.055797 containerd[1468]: time="2024-10-08T20:03:21.055555759Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:03:21.057119 
containerd[1468]: time="2024-10-08T20:03:21.056424052Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:03:21.057119 containerd[1468]: time="2024-10-08T20:03:21.056539254Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:03:21.057119 containerd[1468]: time="2024-10-08T20:03:21.056707726Z" level=info msg="Start subscribing containerd event" Oct 8 20:03:21.057119 containerd[1468]: time="2024-10-08T20:03:21.056784329Z" level=info msg="Start recovering state" Oct 8 20:03:21.057119 containerd[1468]: time="2024-10-08T20:03:21.056922020Z" level=info msg="Start event monitor" Oct 8 20:03:21.057119 containerd[1468]: time="2024-10-08T20:03:21.056949616Z" level=info msg="Start snapshots syncer" Oct 8 20:03:21.057119 containerd[1468]: time="2024-10-08T20:03:21.056970070Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:03:21.057119 containerd[1468]: time="2024-10-08T20:03:21.056990786Z" level=info msg="Start streaming server" Oct 8 20:03:21.057275 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:03:21.061811 containerd[1468]: time="2024-10-08T20:03:21.061553763Z" level=info msg="containerd successfully booted in 0.163566s" Oct 8 20:03:21.308220 sshd[1550]: Accepted publickey for core from 139.178.68.195 port 55578 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:03:21.311069 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:21.342843 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:03:21.353206 tar[1466]: linux-amd64/LICENSE Oct 8 20:03:21.353206 tar[1466]: linux-amd64/README.md Oct 8 20:03:21.363275 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:03:21.396845 systemd-logind[1449]: New session 1 of user core. Oct 8 20:03:21.422811 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:03:21.440699 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:03:21.462193 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:03:21.504905 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:03:21.566341 instance-setup[1512]: INFO Running google_set_multiqueue. Oct 8 20:03:21.592792 instance-setup[1512]: INFO Set channels for eth0 to 2. Oct 8 20:03:21.599798 instance-setup[1512]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Oct 8 20:03:21.602212 instance-setup[1512]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Oct 8 20:03:21.602813 instance-setup[1512]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Oct 8 20:03:21.605014 instance-setup[1512]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Oct 8 20:03:21.605628 instance-setup[1512]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Oct 8 20:03:21.607814 instance-setup[1512]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Oct 8 20:03:21.610239 instance-setup[1512]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Oct 8 20:03:21.610489 instance-setup[1512]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Oct 8 20:03:21.627888 instance-setup[1512]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Oct 8 20:03:21.636351 instance-setup[1512]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Oct 8 20:03:21.639785 instance-setup[1512]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Oct 8 20:03:21.639843 instance-setup[1512]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Oct 8 20:03:21.688014 init.sh[1504]: + /usr/bin/google_metadata_script_runner --script-type startup Oct 8 20:03:21.748524 systemd[1570]: Queued start job for default target default.target. Oct 8 20:03:21.756899 systemd[1570]: Created slice app.slice - User Application Slice. Oct 8 20:03:21.756950 systemd[1570]: Reached target paths.target - Paths. Oct 8 20:03:21.756978 systemd[1570]: Reached target timers.target - Timers. Oct 8 20:03:21.761100 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 20:03:21.794660 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 20:03:21.794886 systemd[1570]: Reached target sockets.target - Sockets. Oct 8 20:03:21.794916 systemd[1570]: Reached target basic.target - Basic System. Oct 8 20:03:21.795103 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 20:03:21.795417 systemd[1570]: Reached target default.target - Main User Target. Oct 8 20:03:21.795501 systemd[1570]: Startup finished in 275ms. Oct 8 20:03:21.813869 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 20:03:21.935865 startup-script[1604]: INFO Starting startup scripts. Oct 8 20:03:21.943733 startup-script[1604]: INFO No startup scripts found in metadata. Oct 8 20:03:21.943813 startup-script[1604]: INFO Finished running startup scripts. Oct 8 20:03:21.973395 init.sh[1504]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Oct 8 20:03:21.973395 init.sh[1504]: + daemon_pids=() Oct 8 20:03:21.973395 init.sh[1504]: + for d in accounts clock_skew network Oct 8 20:03:21.973395 init.sh[1504]: + daemon_pids+=($!) Oct 8 20:03:21.973395 init.sh[1504]: + for d in accounts clock_skew network Oct 8 20:03:21.973761 init.sh[1504]: + daemon_pids+=($!) Oct 8 20:03:21.973761 init.sh[1504]: + for d in accounts clock_skew network Oct 8 20:03:21.974334 init.sh[1504]: + daemon_pids+=($!) Oct 8 20:03:21.974334 init.sh[1504]: + NOTIFY_SOCKET=/run/systemd/notify Oct 8 20:03:21.974334 init.sh[1504]: + /usr/bin/systemd-notify --ready Oct 8 20:03:21.974509 init.sh[1610]: + /usr/bin/google_accounts_daemon Oct 8 20:03:21.974944 init.sh[1611]: + /usr/bin/google_clock_skew_daemon Oct 8 20:03:21.975240 init.sh[1612]: + /usr/bin/google_network_daemon Oct 8 20:03:22.002458 systemd[1]: Started oem-gce.service - GCE Linux Agent. Oct 8 20:03:22.020096 init.sh[1504]: + wait -n 1610 1611 1612 Oct 8 20:03:22.144098 systemd[1]: Started sshd@1-10.128.0.66:22-139.178.68.195:35960.service - OpenSSH per-connection server daemon (139.178.68.195:35960). Oct 8 20:03:22.402523 ntpd[1436]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:42%2]:123 Oct 8 20:03:22.403394 ntpd[1436]: 8 Oct 20:03:22 ntpd[1436]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:42%2]:123 Oct 8 20:03:22.553280 google-clock-skew[1611]: INFO Starting Google Clock Skew daemon. 
Oct 8 20:03:22.577619 sshd[1616]: Accepted publickey for core from 139.178.68.195 port 35960 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:03:22.578196 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:22.590504 systemd-logind[1449]: New session 2 of user core. Oct 8 20:03:22.597737 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 20:03:22.594510 google-clock-skew[1611]: INFO Clock drift token has changed: 0. Oct 8 20:03:22.604382 google-networking[1612]: INFO Starting Google Networking daemon. Oct 8 20:03:22.647969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:22.660911 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:03:22.667324 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:22.670190 groupadd[1627]: group added to /etc/group: name=google-sudoers, GID=1000 Oct 8 20:03:22.671436 systemd[1]: Startup finished in 1.193s (kernel) + 9.257s (initrd) + 9.696s (userspace) = 20.146s. Oct 8 20:03:22.675559 groupadd[1627]: group added to /etc/gshadow: name=google-sudoers Oct 8 20:03:22.747778 groupadd[1627]: new group: name=google-sudoers, GID=1000 Oct 8 20:03:22.784106 google-accounts[1610]: INFO Starting Google Accounts daemon. Oct 8 20:03:22.806081 google-accounts[1610]: WARNING OS Login not installed. Oct 8 20:03:22.808338 google-accounts[1610]: INFO Creating a new user account for 0. Oct 8 20:03:22.815239 init.sh[1647]: useradd: invalid user name '0': use --badname to ignore Oct 8 20:03:22.814810 google-accounts[1610]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Oct 8 20:03:22.856825 sshd[1616]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:22.865544 systemd[1]: sshd@1-10.128.0.66:22-139.178.68.195:35960.service: Deactivated successfully. Oct 8 20:03:22.866675 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Oct 8 20:03:22.870316 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 20:03:22.873630 systemd-logind[1449]: Removed session 2. Oct 8 20:03:22.929385 systemd[1]: Started sshd@2-10.128.0.66:22-139.178.68.195:35962.service - OpenSSH per-connection server daemon (139.178.68.195:35962). Oct 8 20:03:23.000893 systemd-resolved[1317]: Clock change detected. Flushing caches. Oct 8 20:03:23.002004 google-clock-skew[1611]: INFO Synced system time with hardware clock. Oct 8 20:03:23.292140 sshd[1656]: Accepted publickey for core from 139.178.68.195 port 35962 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:03:23.295461 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:23.310116 systemd-logind[1449]: New session 3 of user core. Oct 8 20:03:23.314690 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 20:03:23.564565 sshd[1656]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:23.574174 systemd[1]: sshd@2-10.128.0.66:22-139.178.68.195:35962.service: Deactivated successfully. Oct 8 20:03:23.579165 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 20:03:23.582422 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Oct 8 20:03:23.585205 systemd-logind[1449]: Removed session 3. 
Oct 8 20:03:23.609278 kubelet[1634]: E1008 20:03:23.609188 1634 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:23.613248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:23.613521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:23.614080 systemd[1]: kubelet.service: Consumed 1.297s CPU time. Oct 8 20:03:23.638670 systemd[1]: Started sshd@3-10.128.0.66:22-139.178.68.195:35966.service - OpenSSH per-connection server daemon (139.178.68.195:35966). Oct 8 20:03:24.016690 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 35966 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:03:24.019093 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:24.026087 systemd-logind[1449]: New session 4 of user core. Oct 8 20:03:24.034378 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 20:03:24.296806 sshd[1667]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:24.302515 systemd[1]: sshd@3-10.128.0.66:22-139.178.68.195:35966.service: Deactivated successfully. Oct 8 20:03:24.304963 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 20:03:24.305922 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Oct 8 20:03:24.307459 systemd-logind[1449]: Removed session 4. Oct 8 20:03:24.368884 systemd[1]: Started sshd@4-10.128.0.66:22-139.178.68.195:35982.service - OpenSSH per-connection server daemon (139.178.68.195:35982). Oct 8 20:03:24.757689 sshd[1674]: Accepted publickey for core from 139.178.68.195 port 35982 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:03:24.759666 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:24.765962 systemd-logind[1449]: New session 5 of user core. Oct 8 20:03:24.773255 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 20:03:24.997589 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 20:03:24.998134 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:25.012297 sudo[1677]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:25.071388 sshd[1674]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:25.075871 systemd[1]: sshd@4-10.128.0.66:22-139.178.68.195:35982.service: Deactivated successfully. Oct 8 20:03:25.078345 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 20:03:25.080421 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Oct 8 20:03:25.081923 systemd-logind[1449]: Removed session 5. Oct 8 20:03:25.145522 systemd[1]: Started sshd@5-10.128.0.66:22-139.178.68.195:35996.service - OpenSSH per-connection server daemon (139.178.68.195:35996). Oct 8 20:03:25.530049 sshd[1682]: Accepted publickey for core from 139.178.68.195 port 35996 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:03:25.532189 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:25.538227 systemd-logind[1449]: New session 6 of user core. Oct 8 20:03:25.543231 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 8 20:03:25.756108 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 20:03:25.756637 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:25.761634 sudo[1686]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:25.775377 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 20:03:25.775864 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:25.792411 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 20:03:25.796468 auditctl[1689]: No rules Oct 8 20:03:25.797088 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 20:03:25.797374 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 20:03:25.804559 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 20:03:25.837943 augenrules[1707]: No rules Oct 8 20:03:25.838848 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 20:03:25.840364 sudo[1685]: pam_unix(sudo:session): session closed for user root Oct 8 20:03:25.899167 sshd[1682]: pam_unix(sshd:session): session closed for user core Oct 8 20:03:25.903498 systemd[1]: sshd@5-10.128.0.66:22-139.178.68.195:35996.service: Deactivated successfully. Oct 8 20:03:25.905879 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 20:03:25.907647 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Oct 8 20:03:25.909498 systemd-logind[1449]: Removed session 6. Oct 8 20:03:25.969953 systemd[1]: Started sshd@6-10.128.0.66:22-139.178.68.195:36004.service - OpenSSH per-connection server daemon (139.178.68.195:36004). Oct 8 20:03:26.354107 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 36004 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:03:26.356257 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:03:26.363060 systemd-logind[1449]: New session 7 of user core. Oct 8 20:03:26.373329 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 20:03:26.579971 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 20:03:26.580531 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 8 20:03:27.054450 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 20:03:27.068828 (dockerd)[1733]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 20:03:27.516724 dockerd[1733]: time="2024-10-08T20:03:27.516534623Z" level=info msg="Starting up" Oct 8 20:03:27.843478 dockerd[1733]: time="2024-10-08T20:03:27.843407949Z" level=info msg="Loading containers: start." Oct 8 20:03:27.997073 kernel: Initializing XFRM netlink socket Oct 8 20:03:28.113872 systemd-networkd[1372]: docker0: Link UP Oct 8 20:03:28.135000 dockerd[1733]: time="2024-10-08T20:03:28.134932549Z" level=info msg="Loading containers: done." Oct 8 20:03:28.153803 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3818033770-merged.mount: Deactivated successfully. 
Oct 8 20:03:28.155168 dockerd[1733]: time="2024-10-08T20:03:28.153934086Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 20:03:28.155168 dockerd[1733]: time="2024-10-08T20:03:28.154116653Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 8 20:03:28.155168 dockerd[1733]: time="2024-10-08T20:03:28.154275277Z" level=info msg="Daemon has completed initialization" Oct 8 20:03:28.197880 dockerd[1733]: time="2024-10-08T20:03:28.197113608Z" level=info msg="API listen on /run/docker.sock" Oct 8 20:03:28.197394 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 20:03:29.226058 containerd[1468]: time="2024-10-08T20:03:29.225937770Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 8 20:03:29.733828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501187020.mount: Deactivated successfully. Oct 8 20:03:31.509738 containerd[1468]: time="2024-10-08T20:03:31.509647176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:31.511481 containerd[1468]: time="2024-10-08T20:03:31.511405417Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=32760725" Oct 8 20:03:31.512764 containerd[1468]: time="2024-10-08T20:03:31.512674172Z" level=info msg="ImageCreate event name:\"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:31.520088 containerd[1468]: time="2024-10-08T20:03:31.517876830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:31.521483 containerd[1468]: time="2024-10-08T20:03:31.520545488Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"32750897\" in 2.294529961s" Oct 8 20:03:31.521483 containerd[1468]: time="2024-10-08T20:03:31.520615506Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:e9adc5c075a83b20d2e1f3d047811c0d3a6d89686da0c85549e5757facdcabdb\"" Oct 8 20:03:31.557754 containerd[1468]: time="2024-10-08T20:03:31.557672289Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 8 20:03:33.295869 containerd[1468]: time="2024-10-08T20:03:33.295776608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:33.297727 containerd[1468]: time="2024-10-08T20:03:33.297661430Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=29593586" Oct 8 20:03:33.299356 containerd[1468]: time="2024-10-08T20:03:33.299270972Z" level=info msg="ImageCreate event name:\"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:33.303765 containerd[1468]: time="2024-10-08T20:03:33.303664324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:33.305452 containerd[1468]: time="2024-10-08T20:03:33.305203251Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"31122208\" in 1.747467414s" Oct 8 20:03:33.305452 containerd[1468]: time="2024-10-08T20:03:33.305263020Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:38406042cf08513d32e3d0276280fc953d5880565fb9c52bba28f042542da92e\"" Oct 8 20:03:33.343382 containerd[1468]: time="2024-10-08T20:03:33.343313337Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 8 20:03:33.794206 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:03:33.802497 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:34.083548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:34.090482 (kubelet)[1955]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:34.148458 kubelet[1955]: E1008 20:03:34.148380 1955 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:34.152870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:34.153131 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:03:34.742130 containerd[1468]: time="2024-10-08T20:03:34.742041674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:34.743857 containerd[1468]: time="2024-10-08T20:03:34.743773612Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=17781903" Oct 8 20:03:34.745340 containerd[1468]: time="2024-10-08T20:03:34.745260527Z" level=info msg="ImageCreate event name:\"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:34.749547 containerd[1468]: time="2024-10-08T20:03:34.749461505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:34.751313 containerd[1468]: time="2024-10-08T20:03:34.751048552Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"19310561\" in 1.407678249s" Oct 8 20:03:34.751313 containerd[1468]: time="2024-10-08T20:03:34.751105039Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:25903461e65c35c6917cc6e1c6e7184954f9c886aab70631395eba0d119dcb6d\"" Oct 8 20:03:34.786041 containerd[1468]: time="2024-10-08T20:03:34.785956222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 8 20:03:35.914800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176681227.mount: Deactivated successfully. 
Oct 8 20:03:36.504232 containerd[1468]: time="2024-10-08T20:03:36.504128443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:36.505689 containerd[1468]: time="2024-10-08T20:03:36.505608662Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=29041257" Oct 8 20:03:36.507173 containerd[1468]: time="2024-10-08T20:03:36.507104490Z" level=info msg="ImageCreate event name:\"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:36.509893 containerd[1468]: time="2024-10-08T20:03:36.509849542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:36.511383 containerd[1468]: time="2024-10-08T20:03:36.510795128Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"29038381\" in 1.724774768s" Oct 8 20:03:36.511383 containerd[1468]: time="2024-10-08T20:03:36.510850628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:71161e05b9bb0490ca15080235a4d61f4b9e62554a6fcc70a5839b4dca802682\"" Oct 8 20:03:36.548635 containerd[1468]: time="2024-10-08T20:03:36.548492132Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 20:03:36.947795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113940488.mount: Deactivated successfully. 
Oct 8 20:03:38.013043 containerd[1468]: time="2024-10-08T20:03:38.012952930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:38.014781 containerd[1468]: time="2024-10-08T20:03:38.014697417Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Oct 8 20:03:38.016123 containerd[1468]: time="2024-10-08T20:03:38.016044749Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:38.020197 containerd[1468]: time="2024-10-08T20:03:38.020109020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:38.022000 containerd[1468]: time="2024-10-08T20:03:38.021742543Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.473179903s" Oct 8 20:03:38.022000 containerd[1468]: time="2024-10-08T20:03:38.021799183Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Oct 8 20:03:38.057774 containerd[1468]: time="2024-10-08T20:03:38.057697468Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 20:03:38.483966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673282935.mount: Deactivated successfully. 
Oct 8 20:03:38.492326 containerd[1468]: time="2024-10-08T20:03:38.492247765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:38.493754 containerd[1468]: time="2024-10-08T20:03:38.493678049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Oct 8 20:03:38.494954 containerd[1468]: time="2024-10-08T20:03:38.494873928Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:38.498123 containerd[1468]: time="2024-10-08T20:03:38.498037396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:38.499177 containerd[1468]: time="2024-10-08T20:03:38.499127424Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 441.374652ms" Oct 8 20:03:38.499311 containerd[1468]: time="2024-10-08T20:03:38.499185115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Oct 8 20:03:38.531094 containerd[1468]: time="2024-10-08T20:03:38.531042127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 8 20:03:38.949140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274582102.mount: Deactivated successfully. Oct 8 20:03:41.220077 containerd[1468]: time="2024-10-08T20:03:41.219972987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:41.221678 containerd[1468]: time="2024-10-08T20:03:41.221610084Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Oct 8 20:03:41.222834 containerd[1468]: time="2024-10-08T20:03:41.222784806Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:41.229939 containerd[1468]: time="2024-10-08T20:03:41.229860183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:03:41.231853 containerd[1468]: time="2024-10-08T20:03:41.231634728Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.700529081s" Oct 8 20:03:41.231853 containerd[1468]: time="2024-10-08T20:03:41.231693226Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Oct 8 20:03:44.294199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 8 20:03:44.303505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:44.607288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:44.616755 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:03:44.698698 kubelet[2152]: E1008 20:03:44.698606 2152 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:03:44.703401 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:03:44.703997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:03:46.007194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:46.014430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:46.051042 systemd[1]: Reloading requested from client PID 2166 ('systemctl') (unit session-7.scope)... Oct 8 20:03:46.051315 systemd[1]: Reloading... Oct 8 20:03:46.223055 zram_generator::config[2207]: No configuration found. Oct 8 20:03:46.401279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:03:46.508504 systemd[1]: Reloading finished in 456 ms. Oct 8 20:03:46.567360 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 20:03:46.567513 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 20:03:46.567914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:46.575482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:46.826748 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:46.838657 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:03:46.906066 kubelet[2257]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:03:46.906066 kubelet[2257]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:03:46.906066 kubelet[2257]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 20:03:46.908106 kubelet[2257]: I1008 20:03:46.907992 2257 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:03:47.741118 kubelet[2257]: I1008 20:03:47.741050 2257 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 20:03:47.741118 kubelet[2257]: I1008 20:03:47.741101 2257 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:03:47.741502 kubelet[2257]: I1008 20:03:47.741467 2257 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 20:03:47.772336 kubelet[2257]: I1008 20:03:47.771963 2257 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:03:47.773542 kubelet[2257]: E1008 20:03:47.772947 2257 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.795918 kubelet[2257]: I1008 20:03:47.795873 2257 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:03:47.797417 kubelet[2257]: I1008 20:03:47.797324 2257 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:03:47.797747 kubelet[2257]: I1008 20:03:47.797405 2257 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:03:47.797951 kubelet[2257]: I1008 20:03:47.797760 2257 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:03:47.797951 kubelet[2257]: I1008 20:03:47.797780 2257 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:03:47.798101 kubelet[2257]: I1008 20:03:47.797998 2257 state_mem.go:36] 
"Initialized new in-memory state store" Oct 8 20:03:47.799578 kubelet[2257]: I1008 20:03:47.799536 2257 kubelet.go:400] "Attempting to sync node with API server" Oct 8 20:03:47.799715 kubelet[2257]: I1008 20:03:47.799587 2257 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:03:47.799715 kubelet[2257]: I1008 20:03:47.799632 2257 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:03:47.799715 kubelet[2257]: I1008 20:03:47.799670 2257 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:03:47.806845 kubelet[2257]: W1008 20:03:47.806374 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.806845 kubelet[2257]: E1008 20:03:47.806480 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.806845 kubelet[2257]: W1008 20:03:47.806592 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.806845 kubelet[2257]: E1008 20:03:47.806655 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.807158 kubelet[2257]: I1008 20:03:47.807099 2257 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:03:47.809502 kubelet[2257]: I1008 20:03:47.809440 2257 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:03:47.809645 kubelet[2257]: W1008 20:03:47.809537 2257 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 8 20:03:47.811057 kubelet[2257]: I1008 20:03:47.810820 2257 server.go:1264] "Started kubelet" Oct 8 20:03:47.814119 kubelet[2257]: I1008 20:03:47.813198 2257 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:03:47.814736 kubelet[2257]: I1008 20:03:47.814693 2257 server.go:455] "Adding debug handlers to kubelet server" Oct 8 20:03:47.818664 kubelet[2257]: I1008 20:03:47.817761 2257 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:03:47.818664 kubelet[2257]: I1008 20:03:47.818216 2257 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:03:47.818664 kubelet[2257]: E1008 20:03:47.818419 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.66:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.66:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal.17fc92db5a1d2f54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,UID:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,},FirstTimestamp:2024-10-08 20:03:47.81078306 +0000 UTC m=+0.965079823,LastTimestamp:2024-10-08 20:03:47.81078306 +0000 UTC m=+0.965079823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,}" Oct 8 20:03:47.820287 kubelet[2257]: I1008 20:03:47.819888 2257 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:03:47.825993 kubelet[2257]: I1008 20:03:47.825968 2257 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:03:47.827952 kubelet[2257]: I1008 20:03:47.827069 2257 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 20:03:47.827952 kubelet[2257]: I1008 20:03:47.827161 2257 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:03:47.827952 kubelet[2257]: W1008 20:03:47.827744 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.827952 kubelet[2257]: E1008 20:03:47.827822 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.829701 kubelet[2257]: E1008 20:03:47.829449 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.66:6443: connect: connection refused" interval="200ms" Oct 8 20:03:47.830855 kubelet[2257]: E1008 20:03:47.830629 2257 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:03:47.831897 kubelet[2257]: I1008 20:03:47.831867 2257 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:03:47.831897 kubelet[2257]: I1008 20:03:47.831897 2257 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:03:47.832085 kubelet[2257]: I1008 20:03:47.831982 2257 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:03:47.851513 kubelet[2257]: I1008 20:03:47.851105 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:03:47.854087 kubelet[2257]: I1008 20:03:47.853634 2257 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:03:47.854087 kubelet[2257]: I1008 20:03:47.853677 2257 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:03:47.854087 kubelet[2257]: I1008 20:03:47.853726 2257 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 20:03:47.854087 kubelet[2257]: E1008 20:03:47.853781 2257 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:03:47.866061 kubelet[2257]: W1008 20:03:47.865801 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.866061 kubelet[2257]: E1008 20:03:47.865954 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:47.881650 kubelet[2257]: I1008 20:03:47.881589 2257 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:03:47.881993 kubelet[2257]: I1008 20:03:47.881613 2257 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:03:47.881993 kubelet[2257]: I1008 20:03:47.881866 2257 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:03:47.886259 kubelet[2257]: I1008 20:03:47.886115 2257 policy_none.go:49] "None policy: Start" Oct 8 20:03:47.887498 kubelet[2257]: I1008 20:03:47.887448 2257 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:03:47.887498 kubelet[2257]: I1008 20:03:47.887497 2257 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:03:47.897123 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:03:47.910762 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:03:47.916105 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 20:03:47.928830 kubelet[2257]: I1008 20:03:47.928418 2257 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:03:47.929447 kubelet[2257]: I1008 20:03:47.929110 2257 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:03:47.929447 kubelet[2257]: I1008 20:03:47.929342 2257 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:03:47.932253 kubelet[2257]: E1008 20:03:47.932157 2257 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" not found" Oct 8 20:03:47.933956 kubelet[2257]: I1008 20:03:47.933897 2257 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:47.934499 kubelet[2257]: E1008 20:03:47.934443 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:47.954338 kubelet[2257]: I1008 20:03:47.954208 2257 topology_manager.go:215] "Topology Admit Handler" podUID="77e5928420390280d3f9367450b61a5b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:47.962466 kubelet[2257]: I1008 20:03:47.962191 2257 topology_manager.go:215] "Topology Admit Handler" podUID="80a8e46fef918864d8c8abe2c4e8e230" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:47.978440 kubelet[2257]: I1008 20:03:47.978051 2257 topology_manager.go:215] "Topology Admit Handler" podUID="976b65e805adf15d72f79fb21d17ec58" podNamespace="kube-system" podName="kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:47.985787 systemd[1]: Created slice kubepods-burstable-pod77e5928420390280d3f9367450b61a5b.slice - libcontainer container kubepods-burstable-pod77e5928420390280d3f9367450b61a5b.slice. Oct 8 20:03:48.007303 systemd[1]: Created slice kubepods-burstable-pod80a8e46fef918864d8c8abe2c4e8e230.slice - libcontainer container kubepods-burstable-pod80a8e46fef918864d8c8abe2c4e8e230.slice. Oct 8 20:03:48.016157 systemd[1]: Created slice kubepods-burstable-pod976b65e805adf15d72f79fb21d17ec58.slice - libcontainer container kubepods-burstable-pod976b65e805adf15d72f79fb21d17ec58.slice. 
Oct 8 20:03:48.030494 kubelet[2257]: E1008 20:03:48.030428 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.66:6443: connect: connection refused" interval="400ms" Oct 8 20:03:48.128214 kubelet[2257]: I1008 20:03:48.128083 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77e5928420390280d3f9367450b61a5b-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"77e5928420390280d3f9367450b61a5b\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128214 kubelet[2257]: I1008 20:03:48.128174 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77e5928420390280d3f9367450b61a5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"77e5928420390280d3f9367450b61a5b\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128214 kubelet[2257]: I1008 20:03:48.128214 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128711 kubelet[2257]: I1008 20:03:48.128245 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/976b65e805adf15d72f79fb21d17ec58-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"976b65e805adf15d72f79fb21d17ec58\") " pod="kube-system/kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128711 kubelet[2257]: I1008 20:03:48.128277 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77e5928420390280d3f9367450b61a5b-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"77e5928420390280d3f9367450b61a5b\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128711 kubelet[2257]: I1008 20:03:48.128308 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128711 kubelet[2257]: I1008 20:03:48.128343 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128843 kubelet[2257]: I1008 20:03:48.128371 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.128843 kubelet[2257]: I1008 20:03:48.128408 2257 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.150289 kubelet[2257]: I1008 20:03:48.150242 2257 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.150982 kubelet[2257]: E1008 20:03:48.150935 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.302284 containerd[1468]: time="2024-10-08T20:03:48.302205732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,Uid:77e5928420390280d3f9367450b61a5b,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:48.319637 containerd[1468]: time="2024-10-08T20:03:48.319325227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,Uid:80a8e46fef918864d8c8abe2c4e8e230,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:48.320339 containerd[1468]: time="2024-10-08T20:03:48.320278041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,Uid:976b65e805adf15d72f79fb21d17ec58,Namespace:kube-system,Attempt:0,}" Oct 8 20:03:48.431981 kubelet[2257]: E1008 20:03:48.431889 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.66:6443: connect: connection refused" interval="800ms" Oct 8 20:03:48.557965 kubelet[2257]: I1008 20:03:48.557765 2257 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.558352 kubelet[2257]: E1008 20:03:48.558289 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" 
node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:48.676956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275053418.mount: Deactivated successfully. Oct 8 20:03:48.687175 containerd[1468]: time="2024-10-08T20:03:48.687094561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:48.688564 containerd[1468]: time="2024-10-08T20:03:48.688500322Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:48.689848 containerd[1468]: time="2024-10-08T20:03:48.689780602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:03:48.691104 containerd[1468]: time="2024-10-08T20:03:48.691029127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Oct 8 20:03:48.692717 containerd[1468]: time="2024-10-08T20:03:48.692559092Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:48.694290 containerd[1468]: time="2024-10-08T20:03:48.694233191Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:48.694954 containerd[1468]: time="2024-10-08T20:03:48.694815995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:03:48.698565 containerd[1468]: time="2024-10-08T20:03:48.698488934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:03:48.700584 containerd[1468]: time="2024-10-08T20:03:48.700250335Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 379.864319ms" Oct 8 20:03:48.701818 containerd[1468]: time="2024-10-08T20:03:48.701770785Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 382.307837ms" Oct 8 20:03:48.704570 containerd[1468]: time="2024-10-08T20:03:48.704512484Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 402.156231ms" Oct 8 20:03:48.940426 containerd[1468]: time="2024-10-08T20:03:48.939561138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:48.940426 containerd[1468]: time="2024-10-08T20:03:48.939639302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:48.940426 containerd[1468]: time="2024-10-08T20:03:48.939676814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:48.940426 containerd[1468]: time="2024-10-08T20:03:48.939876267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941523192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941636545Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941665497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941788620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941032045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941142404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941170119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:48.943273 containerd[1468]: time="2024-10-08T20:03:48.941319435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:03:48.987369 systemd[1]: Started cri-containerd-a7fcf9b57931614d586b8df3d27fb97c6c65c2be45186ad56e8456727637996e.scope - libcontainer container a7fcf9b57931614d586b8df3d27fb97c6c65c2be45186ad56e8456727637996e. Oct 8 20:03:49.007399 systemd[1]: Started cri-containerd-56ec8db84cce264ef3727c5a6a45829f3dd6f6a75e858ab46725bb98961deabe.scope - libcontainer container 56ec8db84cce264ef3727c5a6a45829f3dd6f6a75e858ab46725bb98961deabe. Oct 8 20:03:49.011751 systemd[1]: Started cri-containerd-8092a8096f689f6d097ad1bb0c1f83f954f1cd46b924830f9bf89906a4b986ea.scope - libcontainer container 8092a8096f689f6d097ad1bb0c1f83f954f1cd46b924830f9bf89906a4b986ea. 
Oct 8 20:03:49.090487 containerd[1468]: time="2024-10-08T20:03:49.090222824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,Uid:77e5928420390280d3f9367450b61a5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7fcf9b57931614d586b8df3d27fb97c6c65c2be45186ad56e8456727637996e\"" Oct 8 20:03:49.090644 kubelet[2257]: E1008 20:03:49.090240 2257 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.66:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.66:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal.17fc92db5a1d2f54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,UID:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,},FirstTimestamp:2024-10-08 20:03:47.81078306 +0000 UTC m=+0.965079823,LastTimestamp:2024-10-08 20:03:47.81078306 +0000 UTC m=+0.965079823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,}" Oct 8 20:03:49.097830 kubelet[2257]: E1008 20:03:49.097682 2257 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-21291" Oct 8 20:03:49.106138 containerd[1468]: time="2024-10-08T20:03:49.106088094Z" level=info msg="CreateContainer within sandbox \"a7fcf9b57931614d586b8df3d27fb97c6c65c2be45186ad56e8456727637996e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:03:49.126385 containerd[1468]: time="2024-10-08T20:03:49.126085630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,Uid:976b65e805adf15d72f79fb21d17ec58,Namespace:kube-system,Attempt:0,} returns sandbox id \"56ec8db84cce264ef3727c5a6a45829f3dd6f6a75e858ab46725bb98961deabe\"" Oct 8 20:03:49.131373 kubelet[2257]: E1008 20:03:49.129301 2257 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-21291" Oct 8 20:03:49.131373 kubelet[2257]: W1008 20:03:49.130535 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.131373 kubelet[2257]: E1008 20:03:49.130638 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.132899 
containerd[1468]: time="2024-10-08T20:03:49.132834331Z" level=info msg="CreateContainer within sandbox \"56ec8db84cce264ef3727c5a6a45829f3dd6f6a75e858ab46725bb98961deabe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:03:49.137576 containerd[1468]: time="2024-10-08T20:03:49.137513887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,Uid:80a8e46fef918864d8c8abe2c4e8e230,Namespace:kube-system,Attempt:0,} returns sandbox id \"8092a8096f689f6d097ad1bb0c1f83f954f1cd46b924830f9bf89906a4b986ea\"" Oct 8 20:03:49.140346 kubelet[2257]: E1008 20:03:49.140305 2257 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flat" Oct 8 20:03:49.142684 containerd[1468]: time="2024-10-08T20:03:49.142642049Z" level=info msg="CreateContainer within sandbox \"8092a8096f689f6d097ad1bb0c1f83f954f1cd46b924830f9bf89906a4b986ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:03:49.146060 kubelet[2257]: W1008 20:03:49.145901 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.146060 kubelet[2257]: E1008 20:03:49.145990 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.159777 containerd[1468]: time="2024-10-08T20:03:49.159668689Z" level=info msg="CreateContainer within sandbox \"a7fcf9b57931614d586b8df3d27fb97c6c65c2be45186ad56e8456727637996e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"73820bc18d806fa89b01b7b0c09a326b2adff4c22329e99a968b753e2d9b8007\"" Oct 8 20:03:49.160870 containerd[1468]: time="2024-10-08T20:03:49.160804961Z" level=info msg="StartContainer for \"73820bc18d806fa89b01b7b0c09a326b2adff4c22329e99a968b753e2d9b8007\"" Oct 8 20:03:49.169403 containerd[1468]: time="2024-10-08T20:03:49.169288871Z" level=info msg="CreateContainer within sandbox \"56ec8db84cce264ef3727c5a6a45829f3dd6f6a75e858ab46725bb98961deabe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a399c1abe5e0cebb8fba9eb04d0c2be39bd4681cb6934ae9cb573056e76f358\"" Oct 8 20:03:49.170448 containerd[1468]: time="2024-10-08T20:03:49.170337004Z" level=info msg="StartContainer for \"2a399c1abe5e0cebb8fba9eb04d0c2be39bd4681cb6934ae9cb573056e76f358\"" Oct 8 20:03:49.205331 systemd[1]: Started cri-containerd-73820bc18d806fa89b01b7b0c09a326b2adff4c22329e99a968b753e2d9b8007.scope - libcontainer container 73820bc18d806fa89b01b7b0c09a326b2adff4c22329e99a968b753e2d9b8007. 
Oct 8 20:03:49.226261 containerd[1468]: time="2024-10-08T20:03:49.223176735Z" level=info msg="CreateContainer within sandbox \"8092a8096f689f6d097ad1bb0c1f83f954f1cd46b924830f9bf89906a4b986ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"afc6ad0f2abeeba5812aeb80678051b54f508141d975762c28b02fa54fd119b9\"" Oct 8 20:03:49.226261 containerd[1468]: time="2024-10-08T20:03:49.224253699Z" level=info msg="StartContainer for \"afc6ad0f2abeeba5812aeb80678051b54f508141d975762c28b02fa54fd119b9\"" Oct 8 20:03:49.224989 systemd[1]: Started cri-containerd-2a399c1abe5e0cebb8fba9eb04d0c2be39bd4681cb6934ae9cb573056e76f358.scope - libcontainer container 2a399c1abe5e0cebb8fba9eb04d0c2be39bd4681cb6934ae9cb573056e76f358. Oct 8 20:03:49.233411 kubelet[2257]: E1008 20:03:49.233303 2257 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.66:6443: connect: connection refused" interval="1.6s" Oct 8 20:03:49.286464 kubelet[2257]: W1008 20:03:49.286376 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.286464 kubelet[2257]: E1008 20:03:49.286472 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.289655 systemd[1]: Started cri-containerd-afc6ad0f2abeeba5812aeb80678051b54f508141d975762c28b02fa54fd119b9.scope - libcontainer container afc6ad0f2abeeba5812aeb80678051b54f508141d975762c28b02fa54fd119b9. 
Oct 8 20:03:49.347591 containerd[1468]: time="2024-10-08T20:03:49.347526754Z" level=info msg="StartContainer for \"2a399c1abe5e0cebb8fba9eb04d0c2be39bd4681cb6934ae9cb573056e76f358\" returns successfully" Oct 8 20:03:49.348582 containerd[1468]: time="2024-10-08T20:03:49.348103550Z" level=info msg="StartContainer for \"73820bc18d806fa89b01b7b0c09a326b2adff4c22329e99a968b753e2d9b8007\" returns successfully" Oct 8 20:03:49.361703 kubelet[2257]: W1008 20:03:49.361455 2257 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.361703 kubelet[2257]: E1008 20:03:49.361563 2257 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.66:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.66:6443: connect: connection refused Oct 8 20:03:49.369207 kubelet[2257]: I1008 20:03:49.369126 2257 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:49.369795 kubelet[2257]: E1008 20:03:49.369738 2257 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.66:6443/api/v1/nodes\": dial tcp 10.128.0.66:6443: connect: connection refused" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:49.415650 containerd[1468]: time="2024-10-08T20:03:49.415448468Z" level=info msg="StartContainer for \"afc6ad0f2abeeba5812aeb80678051b54f508141d975762c28b02fa54fd119b9\" returns successfully" Oct 8 20:03:50.662029 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 8 20:03:50.976598 kubelet[2257]: I1008 20:03:50.976088 2257 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:53.244917 kubelet[2257]: E1008 20:03:53.244850 2257 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" not found" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:53.282082 kubelet[2257]: I1008 20:03:53.281222 2257 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:53.810692 kubelet[2257]: I1008 20:03:53.808724 2257 apiserver.go:52] "Watching apiserver" Oct 8 20:03:53.827309 kubelet[2257]: I1008 20:03:53.827252 2257 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 20:03:55.039776 kubelet[2257]: W1008 20:03:55.039721 2257 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Oct 8 20:03:55.303232 systemd[1]: Reloading requested from client PID 2532 ('systemctl') (unit session-7.scope)... Oct 8 20:03:55.303259 systemd[1]: Reloading... Oct 8 20:03:55.475063 zram_generator::config[2578]: No configuration found. Oct 8 20:03:55.624968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:03:55.771121 systemd[1]: Reloading finished in 467 ms. 
Oct 8 20:03:55.823886 kubelet[2257]: E1008 20:03:55.823421 2257 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal.17fc92db5a1d2f54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,UID:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,},FirstTimestamp:2024-10-08 20:03:47.81078306 +0000 UTC m=+0.965079823,LastTimestamp:2024-10-08 20:03:47.81078306 +0000 UTC m=+0.965079823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal,}" Oct 8 20:03:55.824269 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:55.843457 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:03:55.843840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:55.843950 systemd[1]: kubelet.service: Consumed 1.529s CPU time, 112.6M memory peak, 0B memory swap peak. Oct 8 20:03:55.850823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:03:56.151296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:03:56.162632 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:03:56.246238 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:03:56.246238 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:03:56.246238 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:03:56.246827 kubelet[2620]: I1008 20:03:56.246388 2620 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:03:56.257037 kubelet[2620]: I1008 20:03:56.256934 2620 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 20:03:56.257037 kubelet[2620]: I1008 20:03:56.256973 2620 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:03:56.257452 kubelet[2620]: I1008 20:03:56.257411 2620 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 20:03:56.259349 kubelet[2620]: I1008 20:03:56.259311 2620 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 20:03:56.261230 kubelet[2620]: I1008 20:03:56.260999 2620 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:03:56.282085 kubelet[2620]: I1008 20:03:56.281582 2620 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:03:56.282610 kubelet[2620]: I1008 20:03:56.282476 2620 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:03:56.283006 kubelet[2620]: I1008 20:03:56.282529 2620 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 20:03:56.283690 kubelet[2620]: I1008 20:03:56.283266 2620 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:03:56.283690 kubelet[2620]: I1008 20:03:56.283296 2620 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:03:56.283690 kubelet[2620]: I1008 20:03:56.283380 2620 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:03:56.283690 kubelet[2620]: I1008 20:03:56.283536 2620 kubelet.go:400] "Attempting to sync node with API server" Oct 8 20:03:56.283690 kubelet[2620]: I1008 20:03:56.283562 2620 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:03:56.283690 kubelet[2620]: I1008 20:03:56.283615 2620 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:03:56.283690 kubelet[2620]: I1008 20:03:56.283645 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:03:56.286927 kubelet[2620]: I1008 20:03:56.286859 2620 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 8 20:03:56.288370 kubelet[2620]: I1008 20:03:56.287321 2620 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:03:56.288370 kubelet[2620]: I1008 20:03:56.288149 2620 server.go:1264] "Started kubelet" Oct 8 
20:03:56.297421 kubelet[2620]: I1008 20:03:56.297345 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:03:56.298552 kubelet[2620]: I1008 20:03:56.298527 2620 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:03:56.299065 kubelet[2620]: I1008 20:03:56.298739 2620 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:03:56.299210 kubelet[2620]: I1008 20:03:56.299172 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:03:56.310817 kubelet[2620]: I1008 20:03:56.310781 2620 server.go:455] "Adding debug handlers to kubelet server" Oct 8 20:03:56.321619 kubelet[2620]: I1008 20:03:56.314960 2620 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:03:56.325299 kubelet[2620]: I1008 20:03:56.323803 2620 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:03:56.325299 kubelet[2620]: I1008 20:03:56.324036 2620 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:03:56.347490 kubelet[2620]: I1008 20:03:56.314988 2620 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 20:03:56.347490 kubelet[2620]: I1008 20:03:56.346450 2620 reconciler.go:26] "Reconciler: start to sync state" Oct 8 20:03:56.361526 kubelet[2620]: E1008 20:03:56.356468 2620 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:03:56.361526 kubelet[2620]: I1008 20:03:56.360992 2620 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:03:56.388534 kubelet[2620]: I1008 20:03:56.388460 2620 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:03:56.390786 kubelet[2620]: I1008 20:03:56.390751 2620 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 20:03:56.391152 kubelet[2620]: I1008 20:03:56.390973 2620 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:03:56.391653 kubelet[2620]: I1008 20:03:56.391005 2620 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 20:03:56.391653 kubelet[2620]: E1008 20:03:56.391336 2620 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:03:56.431396 kubelet[2620]: I1008 20:03:56.429899 2620 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.452678 kubelet[2620]: I1008 20:03:56.452212 2620 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.452678 kubelet[2620]: I1008 20:03:56.452490 2620 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.489717 kubelet[2620]: I1008 20:03:56.488833 2620 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:03:56.489717 kubelet[2620]: I1008 20:03:56.488859 2620 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:03:56.489717 kubelet[2620]: I1008 20:03:56.488911 2620 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:03:56.489717 kubelet[2620]: I1008 20:03:56.489216 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:03:56.489717 kubelet[2620]: I1008 20:03:56.489234 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:03:56.489717 kubelet[2620]: I1008 20:03:56.489266 2620 policy_none.go:49] "None policy: Start" Oct 8 20:03:56.491473 kubelet[2620]: I1008 20:03:56.490468 2620 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:03:56.491473 kubelet[2620]: I1008 20:03:56.490519 2620 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:03:56.491473 kubelet[2620]: I1008 20:03:56.490891 2620 state_mem.go:75] "Updated machine memory state" Oct 8 20:03:56.491826 kubelet[2620]: E1008 20:03:56.491785 2620 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:03:56.502293 kubelet[2620]: I1008 20:03:56.500625 2620 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:03:56.502293 kubelet[2620]: I1008 20:03:56.500885 2620 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 20:03:56.502293 kubelet[2620]: I1008 20:03:56.501404 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:03:56.692446 kubelet[2620]: I1008 20:03:56.692273 2620 topology_manager.go:215] "Topology Admit Handler" podUID="77e5928420390280d3f9367450b61a5b" podNamespace="kube-system" podName="kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.692446 kubelet[2620]: I1008 20:03:56.692436 2620 topology_manager.go:215] "Topology Admit Handler" podUID="80a8e46fef918864d8c8abe2c4e8e230" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.695047 kubelet[2620]: I1008 20:03:56.694718 2620 topology_manager.go:215] "Topology Admit Handler" podUID="976b65e805adf15d72f79fb21d17ec58" podNamespace="kube-system" 
podName="kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.702007 kubelet[2620]: W1008 20:03:56.701533 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Oct 8 20:03:56.703332 kubelet[2620]: W1008 20:03:56.703095 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Oct 8 20:03:56.704827 kubelet[2620]: W1008 20:03:56.704261 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Oct 8 20:03:56.704827 kubelet[2620]: E1008 20:03:56.704347 2620 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749113 kubelet[2620]: I1008 20:03:56.748618 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/77e5928420390280d3f9367450b61a5b-k8s-certs\") pod \"kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"77e5928420390280d3f9367450b61a5b\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749113 kubelet[2620]: I1008 20:03:56.748686 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-ca-certs\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749113 kubelet[2620]: I1008 20:03:56.748729 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749113 kubelet[2620]: I1008 20:03:56.748771 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/976b65e805adf15d72f79fb21d17ec58-kubeconfig\") pod \"kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"976b65e805adf15d72f79fb21d17ec58\") " pod="kube-system/kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749420 kubelet[2620]: I1008 20:03:56.748822 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/77e5928420390280d3f9367450b61a5b-ca-certs\") pod \"kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"77e5928420390280d3f9367450b61a5b\") " 
pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749420 kubelet[2620]: I1008 20:03:56.748873 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/77e5928420390280d3f9367450b61a5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"77e5928420390280d3f9367450b61a5b\") " pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749420 kubelet[2620]: I1008 20:03:56.748910 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-k8s-certs\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749420 kubelet[2620]: I1008 20:03:56.748940 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-kubeconfig\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:56.749553 kubelet[2620]: I1008 20:03:56.748972 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80a8e46fef918864d8c8abe2c4e8e230-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" (UID: \"80a8e46fef918864d8c8abe2c4e8e230\") " pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:57.286595 kubelet[2620]: I1008 20:03:57.286160 2620 apiserver.go:52] "Watching apiserver" Oct 8 20:03:57.347171 kubelet[2620]: I1008 20:03:57.347073 2620 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 20:03:57.484360 kubelet[2620]: W1008 20:03:57.484316 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Oct 8 20:03:57.484626 kubelet[2620]: E1008 20:03:57.484435 2620 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:03:57.490956 kubelet[2620]: W1008 20:03:57.490902 2620 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Oct 8 20:03:57.491212 kubelet[2620]: E1008 20:03:57.491038 2620 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 
20:03:57.526593 kubelet[2620]: I1008 20:03:57.526212 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" podStartSLOduration=1.526176416 podStartE2EDuration="1.526176416s" podCreationTimestamp="2024-10-08 20:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:03:57.500462976 +0000 UTC m=+1.330511569" watchObservedRunningTime="2024-10-08 20:03:57.526176416 +0000 UTC m=+1.356225004" Oct 8 20:03:57.552141 kubelet[2620]: I1008 20:03:57.551834 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" podStartSLOduration=2.551806854 podStartE2EDuration="2.551806854s" podCreationTimestamp="2024-10-08 20:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:03:57.551422964 +0000 UTC m=+1.381471561" watchObservedRunningTime="2024-10-08 20:03:57.551806854 +0000 UTC m=+1.381855446" Oct 8 20:03:57.552141 kubelet[2620]: I1008 20:03:57.551978 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" podStartSLOduration=1.551966848 podStartE2EDuration="1.551966848s" podCreationTimestamp="2024-10-08 20:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:03:57.527656456 +0000 UTC m=+1.357705049" watchObservedRunningTime="2024-10-08 20:03:57.551966848 +0000 UTC m=+1.382015439" Oct 8 20:04:02.748983 sudo[1718]: pam_unix(sudo:session): session closed for user root Oct 8 20:04:02.807577 sshd[1715]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:02.814102 systemd[1]: sshd@6-10.128.0.66:22-139.178.68.195:36004.service: Deactivated successfully. Oct 8 20:04:02.816847 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:04:02.817233 systemd[1]: session-7.scope: Consumed 8.085s CPU time, 192.3M memory peak, 0B memory swap peak. Oct 8 20:04:02.818271 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Oct 8 20:04:02.819981 systemd-logind[1449]: Removed session 7. Oct 8 20:04:04.497190 update_engine[1457]: I20241008 20:04:04.497066 1457 update_attempter.cc:509] Updating boot flags... Oct 8 20:04:04.570048 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2707) Oct 8 20:04:04.686188 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2709) Oct 8 20:04:04.835689 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2709) Oct 8 20:04:10.772806 kubelet[2620]: I1008 20:04:10.772760 2620 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:04:10.775409 containerd[1468]: time="2024-10-08T20:04:10.774698855Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 20:04:10.775946 kubelet[2620]: I1008 20:04:10.775078 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:04:11.421082 kubelet[2620]: I1008 20:04:11.420977 2620 topology_manager.go:215] "Topology Admit Handler" podUID="d6facaed-23eb-4108-a983-554c17ad7e67" podNamespace="kube-system" podName="kube-proxy-72dz8" Oct 8 20:04:11.440154 systemd[1]: Created slice kubepods-besteffort-podd6facaed_23eb_4108_a983_554c17ad7e67.slice - libcontainer container kubepods-besteffort-podd6facaed_23eb_4108_a983_554c17ad7e67.slice. Oct 8 20:04:11.544201 kubelet[2620]: I1008 20:04:11.544062 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvhjm\" (UniqueName: \"kubernetes.io/projected/d6facaed-23eb-4108-a983-554c17ad7e67-kube-api-access-gvhjm\") pod \"kube-proxy-72dz8\" (UID: \"d6facaed-23eb-4108-a983-554c17ad7e67\") " pod="kube-system/kube-proxy-72dz8" Oct 8 20:04:11.544201 kubelet[2620]: I1008 20:04:11.544166 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6facaed-23eb-4108-a983-554c17ad7e67-kube-proxy\") pod \"kube-proxy-72dz8\" (UID: \"d6facaed-23eb-4108-a983-554c17ad7e67\") " pod="kube-system/kube-proxy-72dz8" Oct 8 20:04:11.544606 kubelet[2620]: I1008 20:04:11.544279 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6facaed-23eb-4108-a983-554c17ad7e67-xtables-lock\") pod \"kube-proxy-72dz8\" (UID: \"d6facaed-23eb-4108-a983-554c17ad7e67\") " pod="kube-system/kube-proxy-72dz8" Oct 8 20:04:11.544606 kubelet[2620]: I1008 20:04:11.544317 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6facaed-23eb-4108-a983-554c17ad7e67-lib-modules\") pod \"kube-proxy-72dz8\" (UID: \"d6facaed-23eb-4108-a983-554c17ad7e67\") " pod="kube-system/kube-proxy-72dz8" Oct 8 20:04:11.751241 containerd[1468]: time="2024-10-08T20:04:11.751047150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72dz8,Uid:d6facaed-23eb-4108-a983-554c17ad7e67,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:11.799940 containerd[1468]: time="2024-10-08T20:04:11.798920607Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:11.799940 containerd[1468]: time="2024-10-08T20:04:11.799004859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:11.799940 containerd[1468]: time="2024-10-08T20:04:11.799040823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:11.800842 containerd[1468]: time="2024-10-08T20:04:11.799953917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:11.838364 systemd[1]: Started cri-containerd-a58e9d1647773f6f429c7487903930f254de897f690ffac94d8bf496a5aac44c.scope - libcontainer container a58e9d1647773f6f429c7487903930f254de897f690ffac94d8bf496a5aac44c. 
Oct 8 20:04:11.933190 containerd[1468]: time="2024-10-08T20:04:11.933005602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72dz8,Uid:d6facaed-23eb-4108-a983-554c17ad7e67,Namespace:kube-system,Attempt:0,} returns sandbox id \"a58e9d1647773f6f429c7487903930f254de897f690ffac94d8bf496a5aac44c\"" Oct 8 20:04:11.940952 containerd[1468]: time="2024-10-08T20:04:11.940310129Z" level=info msg="CreateContainer within sandbox \"a58e9d1647773f6f429c7487903930f254de897f690ffac94d8bf496a5aac44c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:04:11.945466 kubelet[2620]: I1008 20:04:11.944980 2620 topology_manager.go:215] "Topology Admit Handler" podUID="7a1452a2-51f8-43e1-bcfc-2655ca2d852b" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-qcsgr" Oct 8 20:04:11.967810 systemd[1]: Created slice kubepods-besteffort-pod7a1452a2_51f8_43e1_bcfc_2655ca2d852b.slice - libcontainer container kubepods-besteffort-pod7a1452a2_51f8_43e1_bcfc_2655ca2d852b.slice. Oct 8 20:04:11.983320 containerd[1468]: time="2024-10-08T20:04:11.983248014Z" level=info msg="CreateContainer within sandbox \"a58e9d1647773f6f429c7487903930f254de897f690ffac94d8bf496a5aac44c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1403817df8f703a7b37f77be1026dc49c079cc28c35e1dee56dc23c818f330ec\"" Oct 8 20:04:11.984600 containerd[1468]: time="2024-10-08T20:04:11.984554804Z" level=info msg="StartContainer for \"1403817df8f703a7b37f77be1026dc49c079cc28c35e1dee56dc23c818f330ec\"" Oct 8 20:04:12.032296 systemd[1]: Started cri-containerd-1403817df8f703a7b37f77be1026dc49c079cc28c35e1dee56dc23c818f330ec.scope - libcontainer container 1403817df8f703a7b37f77be1026dc49c079cc28c35e1dee56dc23c818f330ec. Oct 8 20:04:12.048386 kubelet[2620]: I1008 20:04:12.048262 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a1452a2-51f8-43e1-bcfc-2655ca2d852b-var-lib-calico\") pod \"tigera-operator-77f994b5bb-qcsgr\" (UID: \"7a1452a2-51f8-43e1-bcfc-2655ca2d852b\") " pod="tigera-operator/tigera-operator-77f994b5bb-qcsgr" Oct 8 20:04:12.048991 kubelet[2620]: I1008 20:04:12.048792 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8wc9\" (UniqueName: \"kubernetes.io/projected/7a1452a2-51f8-43e1-bcfc-2655ca2d852b-kube-api-access-q8wc9\") pod \"tigera-operator-77f994b5bb-qcsgr\" (UID: \"7a1452a2-51f8-43e1-bcfc-2655ca2d852b\") " pod="tigera-operator/tigera-operator-77f994b5bb-qcsgr" Oct 8 20:04:12.082031 containerd[1468]: time="2024-10-08T20:04:12.081606685Z" level=info msg="StartContainer for \"1403817df8f703a7b37f77be1026dc49c079cc28c35e1dee56dc23c818f330ec\" returns successfully" Oct 8 20:04:12.276738 containerd[1468]: time="2024-10-08T20:04:12.276654518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-qcsgr,Uid:7a1452a2-51f8-43e1-bcfc-2655ca2d852b,Namespace:tigera-operator,Attempt:0,}" Oct 8 20:04:12.324508 containerd[1468]: time="2024-10-08T20:04:12.323772186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:12.324508 containerd[1468]: time="2024-10-08T20:04:12.323881316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:12.324508 containerd[1468]: time="2024-10-08T20:04:12.323921795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:12.324508 containerd[1468]: time="2024-10-08T20:04:12.324125168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:12.360618 systemd[1]: Started cri-containerd-ef92ca1ca28d619d8bd51242f543bb54d6e98e1d2370dfb0fad35ae1bbbc0e4c.scope - libcontainer container ef92ca1ca28d619d8bd51242f543bb54d6e98e1d2370dfb0fad35ae1bbbc0e4c. Oct 8 20:04:12.456723 containerd[1468]: time="2024-10-08T20:04:12.456620703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-qcsgr,Uid:7a1452a2-51f8-43e1-bcfc-2655ca2d852b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ef92ca1ca28d619d8bd51242f543bb54d6e98e1d2370dfb0fad35ae1bbbc0e4c\"" Oct 8 20:04:12.465217 containerd[1468]: time="2024-10-08T20:04:12.463300861Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 20:04:12.513801 kubelet[2620]: I1008 20:04:12.512358 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72dz8" podStartSLOduration=1.512322029 podStartE2EDuration="1.512322029s" podCreationTimestamp="2024-10-08 20:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:12.511267921 +0000 UTC m=+16.341316511" watchObservedRunningTime="2024-10-08 20:04:12.512322029 +0000 UTC m=+16.342370621" Oct 8 20:04:13.594079 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2506519451.mount: Deactivated successfully. 
Oct 8 20:04:14.876058 containerd[1468]: time="2024-10-08T20:04:14.875968704Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:14.877607 containerd[1468]: time="2024-10-08T20:04:14.877526874Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136497" Oct 8 20:04:14.879538 containerd[1468]: time="2024-10-08T20:04:14.879454525Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:14.882867 containerd[1468]: time="2024-10-08T20:04:14.882755638Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:14.884043 containerd[1468]: time="2024-10-08T20:04:14.883954996Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"22130728\" in 2.420573864s" Oct 8 20:04:14.884043 containerd[1468]: time="2024-10-08T20:04:14.884032132Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 8 20:04:14.887935 containerd[1468]: time="2024-10-08T20:04:14.887768098Z" level=info msg="CreateContainer within sandbox \"ef92ca1ca28d619d8bd51242f543bb54d6e98e1d2370dfb0fad35ae1bbbc0e4c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:04:14.907422 containerd[1468]: time="2024-10-08T20:04:14.907359742Z" level=info msg="CreateContainer within sandbox \"ef92ca1ca28d619d8bd51242f543bb54d6e98e1d2370dfb0fad35ae1bbbc0e4c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e8191cfe8c085bb95388e9682e3ed94234d04f8f8627cccdcf3c751e7a2d9746\"" Oct 8 20:04:14.910051 containerd[1468]: time="2024-10-08T20:04:14.909186143Z" level=info msg="StartContainer for \"e8191cfe8c085bb95388e9682e3ed94234d04f8f8627cccdcf3c751e7a2d9746\"" Oct 8 20:04:14.956433 systemd[1]: run-containerd-runc-k8s.io-e8191cfe8c085bb95388e9682e3ed94234d04f8f8627cccdcf3c751e7a2d9746-runc.1b1rZq.mount: Deactivated successfully. Oct 8 20:04:14.964253 systemd[1]: Started cri-containerd-e8191cfe8c085bb95388e9682e3ed94234d04f8f8627cccdcf3c751e7a2d9746.scope - libcontainer container e8191cfe8c085bb95388e9682e3ed94234d04f8f8627cccdcf3c751e7a2d9746. 
Oct 8 20:04:15.005517 containerd[1468]: time="2024-10-08T20:04:15.005449227Z" level=info msg="StartContainer for \"e8191cfe8c085bb95388e9682e3ed94234d04f8f8627cccdcf3c751e7a2d9746\" returns successfully" Oct 8 20:04:18.094080 kubelet[2620]: I1008 20:04:18.092895 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-qcsgr" podStartSLOduration=4.66827246 podStartE2EDuration="7.092857334s" podCreationTimestamp="2024-10-08 20:04:11 +0000 UTC" firstStartedPulling="2024-10-08 20:04:12.460927393 +0000 UTC m=+16.290975963" lastFinishedPulling="2024-10-08 20:04:14.885512261 +0000 UTC m=+18.715560837" observedRunningTime="2024-10-08 20:04:15.519208459 +0000 UTC m=+19.349257050" watchObservedRunningTime="2024-10-08 20:04:18.092857334 +0000 UTC m=+21.922905926" Oct 8 20:04:18.094080 kubelet[2620]: I1008 20:04:18.093245 2620 topology_manager.go:215] "Topology Admit Handler" podUID="d0809075-7196-4067-94a1-21105b7b4f38" podNamespace="calico-system" podName="calico-typha-c9f7956b8-7rcn2" Oct 8 20:04:18.114822 systemd[1]: Created slice kubepods-besteffort-podd0809075_7196_4067_94a1_21105b7b4f38.slice - libcontainer container kubepods-besteffort-podd0809075_7196_4067_94a1_21105b7b4f38.slice. Oct 8 20:04:18.196869 kubelet[2620]: I1008 20:04:18.196805 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bc5v\" (UniqueName: \"kubernetes.io/projected/d0809075-7196-4067-94a1-21105b7b4f38-kube-api-access-6bc5v\") pod \"calico-typha-c9f7956b8-7rcn2\" (UID: \"d0809075-7196-4067-94a1-21105b7b4f38\") " pod="calico-system/calico-typha-c9f7956b8-7rcn2" Oct 8 20:04:18.197167 kubelet[2620]: I1008 20:04:18.196889 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d0809075-7196-4067-94a1-21105b7b4f38-typha-certs\") pod \"calico-typha-c9f7956b8-7rcn2\" (UID: \"d0809075-7196-4067-94a1-21105b7b4f38\") " pod="calico-system/calico-typha-c9f7956b8-7rcn2" Oct 8 20:04:18.197167 kubelet[2620]: I1008 20:04:18.196931 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0809075-7196-4067-94a1-21105b7b4f38-tigera-ca-bundle\") pod \"calico-typha-c9f7956b8-7rcn2\" (UID: \"d0809075-7196-4067-94a1-21105b7b4f38\") " pod="calico-system/calico-typha-c9f7956b8-7rcn2" Oct 8 20:04:18.221899 kubelet[2620]: I1008 20:04:18.221819 2620 topology_manager.go:215] "Topology Admit Handler" podUID="5594373b-4d17-4895-b40b-15982d530c13" podNamespace="calico-system" podName="calico-node-zjhfs" Oct 8 20:04:18.236677 systemd[1]: Created slice kubepods-besteffort-pod5594373b_4d17_4895_b40b_15982d530c13.slice - libcontainer container kubepods-besteffort-pod5594373b_4d17_4895_b40b_15982d530c13.slice. 
Oct 8 20:04:18.297966 kubelet[2620]: I1008 20:04:18.297885 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-lib-modules\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.297966 kubelet[2620]: I1008 20:04:18.297958 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-cni-bin-dir\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298292 kubelet[2620]: I1008 20:04:18.297992 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-var-run-calico\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298292 kubelet[2620]: I1008 20:04:18.298072 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-xtables-lock\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298292 kubelet[2620]: I1008 20:04:18.298101 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-flexvol-driver-host\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298292 kubelet[2620]: I1008 20:04:18.298146 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-cni-log-dir\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298292 kubelet[2620]: I1008 20:04:18.298222 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-var-lib-calico\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298562 kubelet[2620]: I1008 20:04:18.298255 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-policysync\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298562 kubelet[2620]: I1008 20:04:18.298283 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5594373b-4d17-4895-b40b-15982d530c13-cni-net-dir\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298562 kubelet[2620]: I1008 20:04:18.298313 2620 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5594373b-4d17-4895-b40b-15982d530c13-tigera-ca-bundle\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298562 kubelet[2620]: I1008 20:04:18.298345 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5594373b-4d17-4895-b40b-15982d530c13-node-certs\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.298562 kubelet[2620]: I1008 20:04:18.298377 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksblt\" (UniqueName: \"kubernetes.io/projected/5594373b-4d17-4895-b40b-15982d530c13-kube-api-access-ksblt\") pod \"calico-node-zjhfs\" (UID: \"5594373b-4d17-4895-b40b-15982d530c13\") " pod="calico-system/calico-node-zjhfs" Oct 8 20:04:18.368209 kubelet[2620]: I1008 20:04:18.367109 2620 topology_manager.go:215] "Topology Admit Handler" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" podNamespace="calico-system" podName="csi-node-driver-4tq78" Oct 8 20:04:18.371223 kubelet[2620]: E1008 20:04:18.370888 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tq78" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" Oct 8 20:04:18.399531 kubelet[2620]: I1008 20:04:18.398747 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwt57\" (UniqueName: \"kubernetes.io/projected/339880f2-88f3-4ae0-969e-5e762c1684c8-kube-api-access-bwt57\") pod \"csi-node-driver-4tq78\" (UID: \"339880f2-88f3-4ae0-969e-5e762c1684c8\") " pod="calico-system/csi-node-driver-4tq78" Oct 8 20:04:18.399531 kubelet[2620]: I1008 20:04:18.398840 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/339880f2-88f3-4ae0-969e-5e762c1684c8-registration-dir\") pod \"csi-node-driver-4tq78\" (UID: \"339880f2-88f3-4ae0-969e-5e762c1684c8\") " pod="calico-system/csi-node-driver-4tq78" Oct 8 20:04:18.399531 kubelet[2620]: I1008 20:04:18.398888 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/339880f2-88f3-4ae0-969e-5e762c1684c8-socket-dir\") pod \"csi-node-driver-4tq78\" (UID: \"339880f2-88f3-4ae0-969e-5e762c1684c8\") " pod="calico-system/csi-node-driver-4tq78" Oct 8 20:04:18.399531 kubelet[2620]: I1008 20:04:18.398963 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/339880f2-88f3-4ae0-969e-5e762c1684c8-kubelet-dir\") pod \"csi-node-driver-4tq78\" (UID: \"339880f2-88f3-4ae0-969e-5e762c1684c8\") " pod="calico-system/csi-node-driver-4tq78" Oct 8 20:04:18.400755 kubelet[2620]: E1008 20:04:18.400572 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.400755 kubelet[2620]: W1008 20:04:18.400617 2620 driver-call.go:149] FlexVolume: driver call failed: 
executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.400755 kubelet[2620]: E1008 20:04:18.400649 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.401663 kubelet[2620]: E1008 20:04:18.401523 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.401663 kubelet[2620]: W1008 20:04:18.401545 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.401663 kubelet[2620]: E1008 20:04:18.401575 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.402453 kubelet[2620]: E1008 20:04:18.402370 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.402453 kubelet[2620]: W1008 20:04:18.402393 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.402453 kubelet[2620]: E1008 20:04:18.402422 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.403335 kubelet[2620]: E1008 20:04:18.403311 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.403513 kubelet[2620]: W1008 20:04:18.403335 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.403513 kubelet[2620]: E1008 20:04:18.403424 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.404149 kubelet[2620]: E1008 20:04:18.404114 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.404149 kubelet[2620]: W1008 20:04:18.404143 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.404771 kubelet[2620]: E1008 20:04:18.404178 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.404771 kubelet[2620]: E1008 20:04:18.404581 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.404771 kubelet[2620]: W1008 20:04:18.404596 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.405096 kubelet[2620]: E1008 20:04:18.405041 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.405694 kubelet[2620]: E1008 20:04:18.405669 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.405694 kubelet[2620]: W1008 20:04:18.405693 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.405854 kubelet[2620]: E1008 20:04:18.405822 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.407228 kubelet[2620]: E1008 20:04:18.407201 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.407228 kubelet[2620]: W1008 20:04:18.407220 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.407456 kubelet[2620]: E1008 20:04:18.407367 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.407750 kubelet[2620]: E1008 20:04:18.407721 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.407750 kubelet[2620]: W1008 20:04:18.407741 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.407983 kubelet[2620]: E1008 20:04:18.407883 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.408341 kubelet[2620]: E1008 20:04:18.408318 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.408341 kubelet[2620]: W1008 20:04:18.408339 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.408703 kubelet[2620]: E1008 20:04:18.408654 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.409132 kubelet[2620]: E1008 20:04:18.409108 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.409132 kubelet[2620]: W1008 20:04:18.409131 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.409692 kubelet[2620]: E1008 20:04:18.409345 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.409797 kubelet[2620]: E1008 20:04:18.409737 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.409797 kubelet[2620]: W1008 20:04:18.409752 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.409910 kubelet[2620]: E1008 20:04:18.409871 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.410232 kubelet[2620]: E1008 20:04:18.410209 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.410323 kubelet[2620]: W1008 20:04:18.410248 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.410790 kubelet[2620]: E1008 20:04:18.410432 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.410790 kubelet[2620]: E1008 20:04:18.410771 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.410790 kubelet[2620]: W1008 20:04:18.410785 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.411020 kubelet[2620]: E1008 20:04:18.410952 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.411336 kubelet[2620]: E1008 20:04:18.411295 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.411336 kubelet[2620]: W1008 20:04:18.411312 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.412061 kubelet[2620]: E1008 20:04:18.411657 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.412061 kubelet[2620]: W1008 20:04:18.411677 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.412061 kubelet[2620]: E1008 20:04:18.411697 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.412061 kubelet[2620]: E1008 20:04:18.411737 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.412061 kubelet[2620]: I1008 20:04:18.411769 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/339880f2-88f3-4ae0-969e-5e762c1684c8-varrun\") pod \"csi-node-driver-4tq78\" (UID: \"339880f2-88f3-4ae0-969e-5e762c1684c8\") " pod="calico-system/csi-node-driver-4tq78" Oct 8 20:04:18.412359 kubelet[2620]: E1008 20:04:18.412157 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.412359 kubelet[2620]: W1008 20:04:18.412184 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.412492 kubelet[2620]: E1008 20:04:18.412417 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.413049 kubelet[2620]: E1008 20:04:18.412691 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.413049 kubelet[2620]: W1008 20:04:18.412731 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.413049 kubelet[2620]: E1008 20:04:18.412864 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.413865 kubelet[2620]: E1008 20:04:18.413098 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.413865 kubelet[2620]: W1008 20:04:18.413136 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.413865 kubelet[2620]: E1008 20:04:18.413344 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.413865 kubelet[2620]: E1008 20:04:18.413704 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.413865 kubelet[2620]: W1008 20:04:18.413719 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.413865 kubelet[2620]: E1008 20:04:18.413848 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.414367 kubelet[2620]: E1008 20:04:18.414346 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.414367 kubelet[2620]: W1008 20:04:18.414367 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.414566 kubelet[2620]: E1008 20:04:18.414514 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.415039 kubelet[2620]: E1008 20:04:18.414783 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.415039 kubelet[2620]: W1008 20:04:18.414799 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.415039 kubelet[2620]: E1008 20:04:18.414932 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.415382 kubelet[2620]: E1008 20:04:18.415256 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.415382 kubelet[2620]: W1008 20:04:18.415272 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.415533 kubelet[2620]: E1008 20:04:18.415490 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.419092 kubelet[2620]: E1008 20:04:18.415805 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.419092 kubelet[2620]: W1008 20:04:18.415823 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.419092 kubelet[2620]: E1008 20:04:18.416072 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.419092 kubelet[2620]: E1008 20:04:18.416234 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.419092 kubelet[2620]: W1008 20:04:18.416248 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.419092 kubelet[2620]: E1008 20:04:18.416565 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.419092 kubelet[2620]: W1008 20:04:18.416604 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.419092 kubelet[2620]: E1008 20:04:18.416952 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.419092 kubelet[2620]: W1008 20:04:18.417001 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.419092 kubelet[2620]: E1008 20:04:18.417035 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.419092 kubelet[2620]: E1008 20:04:18.417675 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.419675 kubelet[2620]: W1008 20:04:18.417691 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.419675 kubelet[2620]: E1008 20:04:18.417708 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.419675 kubelet[2620]: E1008 20:04:18.417739 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.419675 kubelet[2620]: E1008 20:04:18.417773 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.419675 kubelet[2620]: E1008 20:04:18.419112 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.419675 kubelet[2620]: W1008 20:04:18.419164 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.419675 kubelet[2620]: E1008 20:04:18.419184 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.424102 kubelet[2620]: E1008 20:04:18.423769 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.424102 kubelet[2620]: W1008 20:04:18.423790 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.424102 kubelet[2620]: E1008 20:04:18.423809 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.432042 containerd[1468]: time="2024-10-08T20:04:18.430230681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9f7956b8-7rcn2,Uid:d0809075-7196-4067-94a1-21105b7b4f38,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:18.464998 kubelet[2620]: E1008 20:04:18.464556 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.469240 kubelet[2620]: W1008 20:04:18.469146 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.469615 kubelet[2620]: E1008 20:04:18.469583 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.514110 kubelet[2620]: E1008 20:04:18.512773 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.514110 kubelet[2620]: W1008 20:04:18.512831 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.514110 kubelet[2620]: E1008 20:04:18.512924 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.514110 kubelet[2620]: E1008 20:04:18.513745 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.514110 kubelet[2620]: W1008 20:04:18.513765 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.514110 kubelet[2620]: E1008 20:04:18.513796 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.514751 kubelet[2620]: E1008 20:04:18.514601 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.514751 kubelet[2620]: W1008 20:04:18.514618 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.514751 kubelet[2620]: E1008 20:04:18.514674 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.516094 kubelet[2620]: E1008 20:04:18.515139 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.516094 kubelet[2620]: W1008 20:04:18.515172 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.516094 kubelet[2620]: E1008 20:04:18.515279 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.517758 kubelet[2620]: E1008 20:04:18.516339 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.517758 kubelet[2620]: W1008 20:04:18.517627 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.518663 kubelet[2620]: E1008 20:04:18.518597 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.519004 kubelet[2620]: E1008 20:04:18.518885 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.519004 kubelet[2620]: W1008 20:04:18.518904 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.521246 kubelet[2620]: E1008 20:04:18.519066 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.521246 kubelet[2620]: E1008 20:04:18.519335 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.521246 kubelet[2620]: W1008 20:04:18.519351 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.521246 kubelet[2620]: E1008 20:04:18.519626 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.521246 kubelet[2620]: E1008 20:04:18.520784 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.521246 kubelet[2620]: W1008 20:04:18.520800 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.521643 kubelet[2620]: E1008 20:04:18.521587 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.521773 kubelet[2620]: E1008 20:04:18.521753 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.521773 kubelet[2620]: W1008 20:04:18.521772 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.521909 kubelet[2620]: E1008 20:04:18.521826 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.525366 kubelet[2620]: E1008 20:04:18.525318 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.525366 kubelet[2620]: W1008 20:04:18.525364 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.525561 kubelet[2620]: E1008 20:04:18.525479 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.525891 kubelet[2620]: E1008 20:04:18.525791 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.525891 kubelet[2620]: W1008 20:04:18.525813 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.526504 kubelet[2620]: E1008 20:04:18.525941 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.526504 kubelet[2620]: E1008 20:04:18.526293 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.526504 kubelet[2620]: W1008 20:04:18.526308 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.526504 kubelet[2620]: E1008 20:04:18.526473 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.527342 kubelet[2620]: E1008 20:04:18.526769 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.527342 kubelet[2620]: W1008 20:04:18.526787 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.527342 kubelet[2620]: E1008 20:04:18.527071 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.529242 kubelet[2620]: E1008 20:04:18.527762 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.529242 kubelet[2620]: W1008 20:04:18.527779 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.529242 kubelet[2620]: E1008 20:04:18.527874 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.529477 kubelet[2620]: E1008 20:04:18.529315 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.529477 kubelet[2620]: W1008 20:04:18.529332 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.529477 kubelet[2620]: E1008 20:04:18.529421 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.529765 kubelet[2620]: E1008 20:04:18.529743 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.529765 kubelet[2620]: W1008 20:04:18.529764 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.530029 kubelet[2620]: E1008 20:04:18.529969 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.531114 kubelet[2620]: E1008 20:04:18.531087 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.531114 kubelet[2620]: W1008 20:04:18.531111 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.531344 kubelet[2620]: E1008 20:04:18.531249 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.531836 kubelet[2620]: E1008 20:04:18.531476 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.531836 kubelet[2620]: W1008 20:04:18.531491 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.531836 kubelet[2620]: E1008 20:04:18.531615 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.532036 kubelet[2620]: E1008 20:04:18.531895 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.532036 kubelet[2620]: W1008 20:04:18.531908 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.532942 kubelet[2620]: E1008 20:04:18.532894 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.533525 kubelet[2620]: E1008 20:04:18.533498 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.533525 kubelet[2620]: W1008 20:04:18.533521 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.533800 kubelet[2620]: E1008 20:04:18.533644 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.535048 kubelet[2620]: E1008 20:04:18.533884 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.535048 kubelet[2620]: W1008 20:04:18.533899 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.535048 kubelet[2620]: E1008 20:04:18.534204 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.536024 kubelet[2620]: E1008 20:04:18.535981 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.536152 kubelet[2620]: W1008 20:04:18.536001 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.536343 kubelet[2620]: E1008 20:04:18.536313 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.536963 kubelet[2620]: E1008 20:04:18.536437 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.536963 kubelet[2620]: W1008 20:04:18.536457 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.536963 kubelet[2620]: E1008 20:04:18.536559 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.537246 kubelet[2620]: E1008 20:04:18.537110 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.537246 kubelet[2620]: W1008 20:04:18.537135 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.537827 kubelet[2620]: E1008 20:04:18.537781 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.538023 containerd[1468]: time="2024-10-08T20:04:18.537379113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:18.538023 containerd[1468]: time="2024-10-08T20:04:18.537484851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:18.538023 containerd[1468]: time="2024-10-08T20:04:18.537513032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:18.538023 containerd[1468]: time="2024-10-08T20:04:18.537678835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:18.538291 kubelet[2620]: E1008 20:04:18.538242 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.538291 kubelet[2620]: W1008 20:04:18.538257 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.538291 kubelet[2620]: E1008 20:04:18.538275 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:04:18.545039 containerd[1468]: time="2024-10-08T20:04:18.544479119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zjhfs,Uid:5594373b-4d17-4895-b40b-15982d530c13,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:18.565833 kubelet[2620]: E1008 20:04:18.565780 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:04:18.567318 kubelet[2620]: W1008 20:04:18.566074 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:04:18.567318 kubelet[2620]: E1008 20:04:18.566272 2620 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:04:18.599761 systemd[1]: Started cri-containerd-cfb991cffc2a29b03c6f4558a39ff03fbcbfc99a0381d12bfc3c2120f8afbbbd.scope - libcontainer container cfb991cffc2a29b03c6f4558a39ff03fbcbfc99a0381d12bfc3c2120f8afbbbd. Oct 8 20:04:18.640286 containerd[1468]: time="2024-10-08T20:04:18.638695896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:18.640286 containerd[1468]: time="2024-10-08T20:04:18.638880679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:18.640286 containerd[1468]: time="2024-10-08T20:04:18.638949872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:18.640286 containerd[1468]: time="2024-10-08T20:04:18.639128876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:18.700409 systemd[1]: Started cri-containerd-46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5.scope - libcontainer container 46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5. 
Oct 8 20:04:18.788356 containerd[1468]: time="2024-10-08T20:04:18.788086842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zjhfs,Uid:5594373b-4d17-4895-b40b-15982d530c13,Namespace:calico-system,Attempt:0,} returns sandbox id \"46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5\"" Oct 8 20:04:18.792192 containerd[1468]: time="2024-10-08T20:04:18.791920336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:04:18.850305 containerd[1468]: time="2024-10-08T20:04:18.848970178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c9f7956b8-7rcn2,Uid:d0809075-7196-4067-94a1-21105b7b4f38,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfb991cffc2a29b03c6f4558a39ff03fbcbfc99a0381d12bfc3c2120f8afbbbd\"" Oct 8 20:04:19.915211 containerd[1468]: time="2024-10-08T20:04:19.915075922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:19.917696 containerd[1468]: time="2024-10-08T20:04:19.917470288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 8 20:04:19.921654 containerd[1468]: time="2024-10-08T20:04:19.920080493Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:19.926661 containerd[1468]: time="2024-10-08T20:04:19.926549555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:19.931084 containerd[1468]: time="2024-10-08T20:04:19.930988122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.138992641s" Oct 8 20:04:19.931974 containerd[1468]: time="2024-10-08T20:04:19.931940197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 8 20:04:19.935606 containerd[1468]: time="2024-10-08T20:04:19.935562247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:04:19.937935 containerd[1468]: time="2024-10-08T20:04:19.937886269Z" level=info msg="CreateContainer within sandbox \"46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:04:19.967274 containerd[1468]: time="2024-10-08T20:04:19.967196769Z" level=info msg="CreateContainer within sandbox \"46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b\"" Oct 8 20:04:19.970053 containerd[1468]: time="2024-10-08T20:04:19.969157886Z" level=info msg="StartContainer for \"9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b\"" Oct 8 20:04:20.055330 systemd[1]: Started 
cri-containerd-9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b.scope - libcontainer container 9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b. Oct 8 20:04:20.119075 containerd[1468]: time="2024-10-08T20:04:20.118978623Z" level=info msg="StartContainer for \"9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b\" returns successfully" Oct 8 20:04:20.163397 systemd[1]: cri-containerd-9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b.scope: Deactivated successfully. Oct 8 20:04:20.315299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b-rootfs.mount: Deactivated successfully. Oct 8 20:04:20.394573 kubelet[2620]: E1008 20:04:20.394504 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tq78" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" Oct 8 20:04:20.537219 containerd[1468]: time="2024-10-08T20:04:20.536907300Z" level=info msg="shim disconnected" id=9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b namespace=k8s.io Oct 8 20:04:20.540199 containerd[1468]: time="2024-10-08T20:04:20.537371139Z" level=warning msg="cleaning up after shim disconnected" id=9a6354e91ce8e99833606bf53be2cb6a346e899a71211213fee7fe4f4c22ab9b namespace=k8s.io Oct 8 20:04:20.540199 containerd[1468]: time="2024-10-08T20:04:20.537394504Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:22.195826 containerd[1468]: time="2024-10-08T20:04:22.195754131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:22.198809 containerd[1468]: time="2024-10-08T20:04:22.198720272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 8 20:04:22.200910 containerd[1468]: time="2024-10-08T20:04:22.200768635Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:22.207053 containerd[1468]: time="2024-10-08T20:04:22.206980947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:22.208117 containerd[1468]: time="2024-10-08T20:04:22.208068904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.272449999s" Oct 8 20:04:22.209051 containerd[1468]: time="2024-10-08T20:04:22.208118512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 8 20:04:22.211937 containerd[1468]: time="2024-10-08T20:04:22.211861996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:04:22.233391 containerd[1468]: time="2024-10-08T20:04:22.233331371Z" level=info msg="CreateContainer within sandbox 
\"cfb991cffc2a29b03c6f4558a39ff03fbcbfc99a0381d12bfc3c2120f8afbbbd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:04:22.263708 containerd[1468]: time="2024-10-08T20:04:22.263594722Z" level=info msg="CreateContainer within sandbox \"cfb991cffc2a29b03c6f4558a39ff03fbcbfc99a0381d12bfc3c2120f8afbbbd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"244a961d84961cf4ff8b57496161ce7dead5192f021ed2641ccac2f8f780f12a\"" Oct 8 20:04:22.266370 containerd[1468]: time="2024-10-08T20:04:22.266324499Z" level=info msg="StartContainer for \"244a961d84961cf4ff8b57496161ce7dead5192f021ed2641ccac2f8f780f12a\"" Oct 8 20:04:22.360388 systemd[1]: Started cri-containerd-244a961d84961cf4ff8b57496161ce7dead5192f021ed2641ccac2f8f780f12a.scope - libcontainer container 244a961d84961cf4ff8b57496161ce7dead5192f021ed2641ccac2f8f780f12a. Oct 8 20:04:22.392926 kubelet[2620]: E1008 20:04:22.392616 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tq78" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" Oct 8 20:04:22.469263 containerd[1468]: time="2024-10-08T20:04:22.468586558Z" level=info msg="StartContainer for \"244a961d84961cf4ff8b57496161ce7dead5192f021ed2641ccac2f8f780f12a\" returns successfully" Oct 8 20:04:23.226928 systemd[1]: run-containerd-runc-k8s.io-244a961d84961cf4ff8b57496161ce7dead5192f021ed2641ccac2f8f780f12a-runc.d9LAgb.mount: Deactivated successfully. Oct 8 20:04:23.552113 kubelet[2620]: I1008 20:04:23.552067 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:04:24.393078 kubelet[2620]: E1008 20:04:24.392986 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tq78" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" Oct 8 20:04:26.381979 containerd[1468]: time="2024-10-08T20:04:26.381905488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:26.383481 containerd[1468]: time="2024-10-08T20:04:26.383375953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 8 20:04:26.384944 containerd[1468]: time="2024-10-08T20:04:26.384867483Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:26.388488 containerd[1468]: time="2024-10-08T20:04:26.388362372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:26.390553 containerd[1468]: time="2024-10-08T20:04:26.389823316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.177899565s" Oct 8 20:04:26.390553 
containerd[1468]: time="2024-10-08T20:04:26.389874938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 8 20:04:26.395291 kubelet[2620]: E1008 20:04:26.395246 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4tq78" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" Oct 8 20:04:26.400438 containerd[1468]: time="2024-10-08T20:04:26.397574933Z" level=info msg="CreateContainer within sandbox \"46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:04:26.420333 containerd[1468]: time="2024-10-08T20:04:26.420249639Z" level=info msg="CreateContainer within sandbox \"46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a\"" Oct 8 20:04:26.421388 containerd[1468]: time="2024-10-08T20:04:26.421350064Z" level=info msg="StartContainer for \"0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a\"" Oct 8 20:04:26.481320 systemd[1]: Started cri-containerd-0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a.scope - libcontainer container 0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a. Oct 8 20:04:26.529287 containerd[1468]: time="2024-10-08T20:04:26.529233855Z" level=info msg="StartContainer for \"0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a\" returns successfully" Oct 8 20:04:26.604841 kubelet[2620]: I1008 20:04:26.604717 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c9f7956b8-7rcn2" podStartSLOduration=5.248927757 podStartE2EDuration="8.604687602s" podCreationTimestamp="2024-10-08 20:04:18 +0000 UTC" firstStartedPulling="2024-10-08 20:04:18.854799641 +0000 UTC m=+22.684848219" lastFinishedPulling="2024-10-08 20:04:22.210559481 +0000 UTC m=+26.040608064" observedRunningTime="2024-10-08 20:04:22.605500902 +0000 UTC m=+26.435549494" watchObservedRunningTime="2024-10-08 20:04:26.604687602 +0000 UTC m=+30.434736230" Oct 8 20:04:27.807056 systemd[1]: cri-containerd-0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a.scope: Deactivated successfully. Oct 8 20:04:27.857066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a-rootfs.mount: Deactivated successfully. 
Oct 8 20:04:27.865894 kubelet[2620]: I1008 20:04:27.865228 2620 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:04:27.907507 kubelet[2620]: I1008 20:04:27.907414 2620 topology_manager.go:215] "Topology Admit Handler" podUID="e9a17030-3a1d-4d27-94e8-64bccf5f8ba4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j86sj" Oct 8 20:04:27.912855 kubelet[2620]: I1008 20:04:27.912529 2620 topology_manager.go:215] "Topology Admit Handler" podUID="bc1181be-ac87-4d2f-a808-b62c5fc38f5a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g7x9j" Oct 8 20:04:27.916788 kubelet[2620]: I1008 20:04:27.915771 2620 topology_manager.go:215] "Topology Admit Handler" podUID="7d345d2a-35fa-4b82-8298-e61701951a29" podNamespace="calico-system" podName="calico-kube-controllers-6dd5967bb4-bpclm" Oct 8 20:04:27.932754 systemd[1]: Created slice kubepods-burstable-pode9a17030_3a1d_4d27_94e8_64bccf5f8ba4.slice - libcontainer container kubepods-burstable-pode9a17030_3a1d_4d27_94e8_64bccf5f8ba4.slice. Oct 8 20:04:27.948160 systemd[1]: Created slice kubepods-burstable-podbc1181be_ac87_4d2f_a808_b62c5fc38f5a.slice - libcontainer container kubepods-burstable-podbc1181be_ac87_4d2f_a808_b62c5fc38f5a.slice. Oct 8 20:04:27.966299 systemd[1]: Created slice kubepods-besteffort-pod7d345d2a_35fa_4b82_8298_e61701951a29.slice - libcontainer container kubepods-besteffort-pod7d345d2a_35fa_4b82_8298_e61701951a29.slice. Oct 8 20:04:27.998389 kubelet[2620]: I1008 20:04:27.998264 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6lrf\" (UniqueName: \"kubernetes.io/projected/e9a17030-3a1d-4d27-94e8-64bccf5f8ba4-kube-api-access-p6lrf\") pod \"coredns-7db6d8ff4d-j86sj\" (UID: \"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4\") " pod="kube-system/coredns-7db6d8ff4d-j86sj" Oct 8 20:04:27.998389 kubelet[2620]: I1008 20:04:27.998353 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d345d2a-35fa-4b82-8298-e61701951a29-tigera-ca-bundle\") pod \"calico-kube-controllers-6dd5967bb4-bpclm\" (UID: \"7d345d2a-35fa-4b82-8298-e61701951a29\") " pod="calico-system/calico-kube-controllers-6dd5967bb4-bpclm" Oct 8 20:04:27.998389 kubelet[2620]: I1008 20:04:27.998399 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bc1181be-ac87-4d2f-a808-b62c5fc38f5a-config-volume\") pod \"coredns-7db6d8ff4d-g7x9j\" (UID: \"bc1181be-ac87-4d2f-a808-b62c5fc38f5a\") " pod="kube-system/coredns-7db6d8ff4d-g7x9j" Oct 8 20:04:27.998389 kubelet[2620]: I1008 20:04:27.998433 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpv4z\" (UniqueName: \"kubernetes.io/projected/bc1181be-ac87-4d2f-a808-b62c5fc38f5a-kube-api-access-lpv4z\") pod \"coredns-7db6d8ff4d-g7x9j\" (UID: \"bc1181be-ac87-4d2f-a808-b62c5fc38f5a\") " pod="kube-system/coredns-7db6d8ff4d-g7x9j" Oct 8 20:04:27.998893 kubelet[2620]: I1008 20:04:27.998487 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9a17030-3a1d-4d27-94e8-64bccf5f8ba4-config-volume\") pod \"coredns-7db6d8ff4d-j86sj\" (UID: \"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4\") " pod="kube-system/coredns-7db6d8ff4d-j86sj" Oct 8 20:04:27.998893 kubelet[2620]: I1008 20:04:27.998519 
2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sffrq\" (UniqueName: \"kubernetes.io/projected/7d345d2a-35fa-4b82-8298-e61701951a29-kube-api-access-sffrq\") pod \"calico-kube-controllers-6dd5967bb4-bpclm\" (UID: \"7d345d2a-35fa-4b82-8298-e61701951a29\") " pod="calico-system/calico-kube-controllers-6dd5967bb4-bpclm" Oct 8 20:04:28.244513 containerd[1468]: time="2024-10-08T20:04:28.244296859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j86sj,Uid:e9a17030-3a1d-4d27-94e8-64bccf5f8ba4,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:28.258629 containerd[1468]: time="2024-10-08T20:04:28.257961773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g7x9j,Uid:bc1181be-ac87-4d2f-a808-b62c5fc38f5a,Namespace:kube-system,Attempt:0,}" Oct 8 20:04:28.276316 containerd[1468]: time="2024-10-08T20:04:28.276249893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd5967bb4-bpclm,Uid:7d345d2a-35fa-4b82-8298-e61701951a29,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:28.402300 systemd[1]: Created slice kubepods-besteffort-pod339880f2_88f3_4ae0_969e_5e762c1684c8.slice - libcontainer container kubepods-besteffort-pod339880f2_88f3_4ae0_969e_5e762c1684c8.slice. Oct 8 20:04:28.406539 containerd[1468]: time="2024-10-08T20:04:28.406173624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tq78,Uid:339880f2-88f3-4ae0-969e-5e762c1684c8,Namespace:calico-system,Attempt:0,}" Oct 8 20:04:28.872181 containerd[1468]: time="2024-10-08T20:04:28.872073473Z" level=info msg="shim disconnected" id=0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a namespace=k8s.io Oct 8 20:04:28.872181 containerd[1468]: time="2024-10-08T20:04:28.872145269Z" level=warning msg="cleaning up after shim disconnected" id=0861044133d25622786e7a336c780e64137fdfeb23c6f6506d5b33481c7c1f5a namespace=k8s.io Oct 8 20:04:28.872181 containerd[1468]: time="2024-10-08T20:04:28.872164543Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:04:29.112842 containerd[1468]: time="2024-10-08T20:04:29.112775759Z" level=error msg="Failed to destroy network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.114416 containerd[1468]: time="2024-10-08T20:04:29.113943465Z" level=error msg="encountered an error cleaning up failed sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.114548 containerd[1468]: time="2024-10-08T20:04:29.114468185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd5967bb4-bpclm,Uid:7d345d2a-35fa-4b82-8298-e61701951a29,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.115032 kubelet[2620]: E1008 
20:04:29.114818 2620 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.115032 kubelet[2620]: E1008 20:04:29.114928 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dd5967bb4-bpclm" Oct 8 20:04:29.115032 kubelet[2620]: E1008 20:04:29.114966 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6dd5967bb4-bpclm" Oct 8 20:04:29.116109 kubelet[2620]: E1008 20:04:29.115113 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6dd5967bb4-bpclm_calico-system(7d345d2a-35fa-4b82-8298-e61701951a29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6dd5967bb4-bpclm_calico-system(7d345d2a-35fa-4b82-8298-e61701951a29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dd5967bb4-bpclm" podUID="7d345d2a-35fa-4b82-8298-e61701951a29" Oct 8 20:04:29.156130 containerd[1468]: time="2024-10-08T20:04:29.154509890Z" level=error msg="Failed to destroy network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.156130 containerd[1468]: time="2024-10-08T20:04:29.155084181Z" level=error msg="encountered an error cleaning up failed sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.156130 containerd[1468]: time="2024-10-08T20:04:29.155205236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tq78,Uid:339880f2-88f3-4ae0-969e-5e762c1684c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.156420 kubelet[2620]: E1008 20:04:29.155517 2620 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.156420 kubelet[2620]: E1008 20:04:29.155598 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4tq78" Oct 8 20:04:29.156420 kubelet[2620]: E1008 20:04:29.155633 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4tq78" Oct 8 20:04:29.156624 kubelet[2620]: E1008 20:04:29.155698 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4tq78_calico-system(339880f2-88f3-4ae0-969e-5e762c1684c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4tq78_calico-system(339880f2-88f3-4ae0-969e-5e762c1684c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4tq78" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" Oct 8 20:04:29.162403 containerd[1468]: time="2024-10-08T20:04:29.162345678Z" level=error msg="Failed to destroy network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.163945 containerd[1468]: time="2024-10-08T20:04:29.163001063Z" level=error msg="encountered an error cleaning up failed sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.163945 containerd[1468]: time="2024-10-08T20:04:29.163106768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j86sj,Uid:e9a17030-3a1d-4d27-94e8-64bccf5f8ba4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.164846 kubelet[2620]: E1008 20:04:29.163415 2620 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.164846 kubelet[2620]: E1008 20:04:29.163487 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j86sj" Oct 8 20:04:29.164846 kubelet[2620]: E1008 20:04:29.163516 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j86sj" Oct 8 20:04:29.165067 kubelet[2620]: E1008 20:04:29.163574 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j86sj_kube-system(e9a17030-3a1d-4d27-94e8-64bccf5f8ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j86sj_kube-system(e9a17030-3a1d-4d27-94e8-64bccf5f8ba4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j86sj" podUID="e9a17030-3a1d-4d27-94e8-64bccf5f8ba4" Oct 8 20:04:29.167460 containerd[1468]: time="2024-10-08T20:04:29.167361651Z" level=error msg="Failed to destroy network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.168233 containerd[1468]: time="2024-10-08T20:04:29.167795573Z" level=error msg="encountered an error cleaning up failed sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.168233 containerd[1468]: time="2024-10-08T20:04:29.167871440Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g7x9j,Uid:bc1181be-ac87-4d2f-a808-b62c5fc38f5a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.168412 kubelet[2620]: E1008 20:04:29.168251 2620 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.168412 kubelet[2620]: E1008 20:04:29.168319 2620 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g7x9j" Oct 8 20:04:29.168412 kubelet[2620]: E1008 20:04:29.168354 2620 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g7x9j" Oct 8 20:04:29.168749 kubelet[2620]: E1008 20:04:29.168500 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-g7x9j_kube-system(bc1181be-ac87-4d2f-a808-b62c5fc38f5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-g7x9j_kube-system(bc1181be-ac87-4d2f-a808-b62c5fc38f5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-g7x9j" podUID="bc1181be-ac87-4d2f-a808-b62c5fc38f5a" Oct 8 20:04:29.573445 kubelet[2620]: I1008 20:04:29.573405 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:29.575967 containerd[1468]: time="2024-10-08T20:04:29.574319876Z" level=info msg="StopPodSandbox for \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\"" Oct 8 20:04:29.575967 containerd[1468]: time="2024-10-08T20:04:29.575511507Z" level=info msg="Ensure that sandbox 49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05 in task-service has been cleanup successfully" Oct 8 20:04:29.580720 kubelet[2620]: I1008 20:04:29.579821 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:29.581886 containerd[1468]: time="2024-10-08T20:04:29.581813976Z" level=info msg="StopPodSandbox for \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\"" Oct 8 20:04:29.582532 containerd[1468]: 
time="2024-10-08T20:04:29.582500955Z" level=info msg="Ensure that sandbox 006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc in task-service has been cleanup successfully" Oct 8 20:04:29.590273 kubelet[2620]: I1008 20:04:29.589997 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:29.600320 containerd[1468]: time="2024-10-08T20:04:29.600261070Z" level=info msg="StopPodSandbox for \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\"" Oct 8 20:04:29.601958 containerd[1468]: time="2024-10-08T20:04:29.601529815Z" level=info msg="Ensure that sandbox b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814 in task-service has been cleanup successfully" Oct 8 20:04:29.620328 containerd[1468]: time="2024-10-08T20:04:29.620268784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:04:29.627254 kubelet[2620]: I1008 20:04:29.627213 2620 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:29.632046 containerd[1468]: time="2024-10-08T20:04:29.629923029Z" level=info msg="StopPodSandbox for \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\"" Oct 8 20:04:29.632423 containerd[1468]: time="2024-10-08T20:04:29.632381399Z" level=info msg="Ensure that sandbox 01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66 in task-service has been cleanup successfully" Oct 8 20:04:29.692237 containerd[1468]: time="2024-10-08T20:04:29.692051242Z" level=error msg="StopPodSandbox for \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\" failed" error="failed to destroy network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.693267 kubelet[2620]: E1008 20:04:29.692984 2620 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:29.693425 kubelet[2620]: E1008 20:04:29.693328 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05"} Oct 8 20:04:29.693517 kubelet[2620]: E1008 20:04:29.693470 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"339880f2-88f3-4ae0-969e-5e762c1684c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:29.693648 kubelet[2620]: E1008 20:04:29.693555 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"339880f2-88f3-4ae0-969e-5e762c1684c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4tq78" podUID="339880f2-88f3-4ae0-969e-5e762c1684c8" Oct 8 20:04:29.736581 containerd[1468]: time="2024-10-08T20:04:29.736312792Z" level=error msg="StopPodSandbox for \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\" failed" error="failed to destroy network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.737406 containerd[1468]: time="2024-10-08T20:04:29.737120483Z" level=error msg="StopPodSandbox for \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\" failed" error="failed to destroy network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.738881 kubelet[2620]: E1008 20:04:29.738827 2620 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:29.739066 kubelet[2620]: E1008 20:04:29.738915 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc"} Oct 8 20:04:29.739066 kubelet[2620]: E1008 20:04:29.738998 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc1181be-ac87-4d2f-a808-b62c5fc38f5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:29.739248 kubelet[2620]: E1008 20:04:29.739061 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc1181be-ac87-4d2f-a808-b62c5fc38f5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-g7x9j" podUID="bc1181be-ac87-4d2f-a808-b62c5fc38f5a" Oct 8 20:04:29.740930 kubelet[2620]: E1008 20:04:29.740880 2620 remote_runtime.go:222] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:29.741099 kubelet[2620]: E1008 20:04:29.741066 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814"} Oct 8 20:04:29.741277 kubelet[2620]: E1008 20:04:29.741247 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:29.742595 kubelet[2620]: E1008 20:04:29.742547 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j86sj" podUID="e9a17030-3a1d-4d27-94e8-64bccf5f8ba4" Oct 8 20:04:29.752364 containerd[1468]: time="2024-10-08T20:04:29.752288204Z" level=error msg="StopPodSandbox for \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\" failed" error="failed to destroy network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:04:29.752860 kubelet[2620]: E1008 20:04:29.752793 2620 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:29.753143 kubelet[2620]: E1008 20:04:29.752877 2620 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66"} Oct 8 20:04:29.753143 kubelet[2620]: E1008 20:04:29.752932 2620 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d345d2a-35fa-4b82-8298-e61701951a29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:04:29.753143 kubelet[2620]: E1008 20:04:29.753058 2620 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d345d2a-35fa-4b82-8298-e61701951a29\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6dd5967bb4-bpclm" podUID="7d345d2a-35fa-4b82-8298-e61701951a29" Oct 8 20:04:29.910828 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05-shm.mount: Deactivated successfully. Oct 8 20:04:29.911371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc-shm.mount: Deactivated successfully. Oct 8 20:04:29.911736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814-shm.mount: Deactivated successfully. Oct 8 20:04:29.912253 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66-shm.mount: Deactivated successfully. Oct 8 20:04:33.597893 kubelet[2620]: I1008 20:04:33.597813 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:04:36.073275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192474570.mount: Deactivated successfully. Oct 8 20:04:36.107292 containerd[1468]: time="2024-10-08T20:04:36.107202253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:36.108902 containerd[1468]: time="2024-10-08T20:04:36.108812804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 8 20:04:36.110681 containerd[1468]: time="2024-10-08T20:04:36.110602061Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:36.115938 containerd[1468]: time="2024-10-08T20:04:36.114997959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 6.494671069s" Oct 8 20:04:36.115938 containerd[1468]: time="2024-10-08T20:04:36.115072583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 8 20:04:36.115938 containerd[1468]: time="2024-10-08T20:04:36.115761338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:36.132514 containerd[1468]: time="2024-10-08T20:04:36.132454734Z" level=info msg="CreateContainer within sandbox 
\"46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:04:36.164690 containerd[1468]: time="2024-10-08T20:04:36.164606446Z" level=info msg="CreateContainer within sandbox \"46b5244d805ea1fdd40b86edc1b998235d239fe73c87c77bfed3909afe696ff5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c633fff77f7c2520af9af0d84ae3dca185921f5f73e474984ddb76e104053ba0\"" Oct 8 20:04:36.168155 containerd[1468]: time="2024-10-08T20:04:36.165931299Z" level=info msg="StartContainer for \"c633fff77f7c2520af9af0d84ae3dca185921f5f73e474984ddb76e104053ba0\"" Oct 8 20:04:36.217296 systemd[1]: Started cri-containerd-c633fff77f7c2520af9af0d84ae3dca185921f5f73e474984ddb76e104053ba0.scope - libcontainer container c633fff77f7c2520af9af0d84ae3dca185921f5f73e474984ddb76e104053ba0. Oct 8 20:04:36.271278 containerd[1468]: time="2024-10-08T20:04:36.271201521Z" level=info msg="StartContainer for \"c633fff77f7c2520af9af0d84ae3dca185921f5f73e474984ddb76e104053ba0\" returns successfully" Oct 8 20:04:36.400623 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:04:36.400835 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 20:04:36.712273 kubelet[2620]: I1008 20:04:36.711730 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zjhfs" podStartSLOduration=1.3854108 podStartE2EDuration="18.711691959s" podCreationTimestamp="2024-10-08 20:04:18 +0000 UTC" firstStartedPulling="2024-10-08 20:04:18.791049197 +0000 UTC m=+22.621097776" lastFinishedPulling="2024-10-08 20:04:36.11733035 +0000 UTC m=+39.947378935" observedRunningTime="2024-10-08 20:04:36.706996498 +0000 UTC m=+40.537045090" watchObservedRunningTime="2024-10-08 20:04:36.711691959 +0000 UTC m=+40.541740546" Oct 8 20:04:38.468055 kernel: bpftool[3787]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:04:38.767242 systemd-networkd[1372]: vxlan.calico: Link UP Oct 8 20:04:38.767256 systemd-networkd[1372]: vxlan.calico: Gained carrier Oct 8 20:04:40.002589 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Oct 8 20:04:42.371649 ntpd[1436]: Listen normally on 7 vxlan.calico 192.168.8.192:123 Oct 8 20:04:42.371996 ntpd[1436]: Listen normally on 8 vxlan.calico [fe80::6414:adff:fea8:98ad%4]:123 Oct 8 20:04:42.372483 ntpd[1436]: 8 Oct 20:04:42 ntpd[1436]: Listen normally on 7 vxlan.calico 192.168.8.192:123 Oct 8 20:04:42.372483 ntpd[1436]: 8 Oct 20:04:42 ntpd[1436]: Listen normally on 8 vxlan.calico [fe80::6414:adff:fea8:98ad%4]:123 Oct 8 20:04:42.393905 containerd[1468]: time="2024-10-08T20:04:42.393432002Z" level=info msg="StopPodSandbox for \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\"" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.454 [INFO][3873] k8s.go 608: Cleaning up netns ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.454 [INFO][3873] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" iface="eth0" netns="/var/run/netns/cni-0851abfe-6a61-afd3-159a-905ea7a2d3dc" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.455 [INFO][3873] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" iface="eth0" netns="/var/run/netns/cni-0851abfe-6a61-afd3-159a-905ea7a2d3dc" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.456 [INFO][3873] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" iface="eth0" netns="/var/run/netns/cni-0851abfe-6a61-afd3-159a-905ea7a2d3dc" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.456 [INFO][3873] k8s.go 615: Releasing IP address(es) ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.456 [INFO][3873] utils.go 188: Calico CNI releasing IP address ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.485 [INFO][3879] ipam_plugin.go 417: Releasing address using handleID ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.485 [INFO][3879] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.485 [INFO][3879] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.492 [WARNING][3879] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.492 [INFO][3879] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.494 [INFO][3879] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:42.498446 containerd[1468]: 2024-10-08 20:04:42.496 [INFO][3873] k8s.go 621: Teardown processing complete. ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:42.499691 containerd[1468]: time="2024-10-08T20:04:42.499369084Z" level=info msg="TearDown network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\" successfully" Oct 8 20:04:42.499691 containerd[1468]: time="2024-10-08T20:04:42.499414836Z" level=info msg="StopPodSandbox for \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\" returns successfully" Oct 8 20:04:42.503114 containerd[1468]: time="2024-10-08T20:04:42.501475045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j86sj,Uid:e9a17030-3a1d-4d27-94e8-64bccf5f8ba4,Namespace:kube-system,Attempt:1,}" Oct 8 20:04:42.505843 systemd[1]: run-netns-cni\x2d0851abfe\x2d6a61\x2dafd3\x2d159a\x2d905ea7a2d3dc.mount: Deactivated successfully. 
Oct 8 20:04:42.662330 systemd-networkd[1372]: caliddf8e829326: Link UP Oct 8 20:04:42.662676 systemd-networkd[1372]: caliddf8e829326: Gained carrier Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.570 [INFO][3886] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0 coredns-7db6d8ff4d- kube-system e9a17030-3a1d-4d27-94e8-64bccf5f8ba4 681 0 2024-10-08 20:04:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal coredns-7db6d8ff4d-j86sj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliddf8e829326 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.571 [INFO][3886] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.610 [INFO][3897] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" HandleID="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.622 [INFO][3897] ipam_plugin.go 270: Auto assigning IP ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" HandleID="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-j86sj", "timestamp":"2024-10-08 20:04:42.610587407 +0000 UTC"}, Hostname:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.622 [INFO][3897] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.622 [INFO][3897] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.622 [INFO][3897] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal' Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.624 [INFO][3897] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.629 [INFO][3897] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.634 [INFO][3897] ipam.go 489: Trying affinity for 192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.636 [INFO][3897] ipam.go 155: Attempting to load block cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.638 [INFO][3897] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.638 [INFO][3897] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.192/26 handle="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.640 [INFO][3897] ipam.go 1685: Creating new handle: k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.645 [INFO][3897] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.192/26 handle="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.653 [INFO][3897] ipam.go 1216: Successfully claimed IPs: [192.168.8.193/26] block=192.168.8.192/26 handle="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.653 [INFO][3897] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.193/26] handle="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.653 [INFO][3897] ipam_plugin.go 379: Released host-wide IPAM lock. 
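The ipam.go lines above trace the allocation path for this node's first workload address: the host already holds an affinity for block 192.168.8.192/26, the block is loaded, and 192.168.8.193 is claimed for coredns-7db6d8ff4d-j86sj; the csi-node-driver pod later receives 192.168.8.194 from the same block, while the block's base address 192.168.8.192 is the one seen earlier on the vxlan.calico interface. As a small worked illustration of what that /26 covers (a toy walk over the block, not Calico's allocator):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this host has an affinity for, as logged by ipam.go.
	block := netip.MustParsePrefix("192.168.8.192/26")

	// A /26 spans 64 addresses: 192.168.8.192 through 192.168.8.255.
	// Print the first few to show where the logged assignments fall:
	// .192 (vxlan.calico), .193 (coredns-7db6d8ff4d-j86sj), .194 (csi-node-driver-4tq78).
	addr := block.Addr()
	for i := 0; i < 4 && block.Contains(addr); i++ {
		fmt.Println(addr)
		addr = addr.Next()
	}
}
```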
Oct 8 20:04:42.689642 containerd[1468]: 2024-10-08 20:04:42.653 [INFO][3897] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.193/26] IPv6=[] ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" HandleID="k8s-pod-network.fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.691999 containerd[1468]: 2024-10-08 20:04:42.656 [INFO][3886] k8s.go 386: Populated endpoint ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-j86sj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf8e829326", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:42.691999 containerd[1468]: 2024-10-08 20:04:42.656 [INFO][3886] k8s.go 387: Calico CNI using IPs: [192.168.8.193/32] ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.691999 containerd[1468]: 2024-10-08 20:04:42.656 [INFO][3886] dataplane_linux.go 68: Setting the host side veth name to caliddf8e829326 ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.691999 containerd[1468]: 2024-10-08 20:04:42.663 [INFO][3886] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.691999 containerd[1468]: 2024-10-08 20:04:42.666 [INFO][3886] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd", Pod:"coredns-7db6d8ff4d-j86sj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf8e829326", MAC:"16:7f:c5:76:60:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:42.691999 containerd[1468]: 2024-10-08 20:04:42.683 [INFO][3886] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j86sj" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:42.732535 containerd[1468]: time="2024-10-08T20:04:42.731900241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:42.732535 containerd[1468]: time="2024-10-08T20:04:42.732078048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:42.732535 containerd[1468]: time="2024-10-08T20:04:42.732217920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:42.732535 containerd[1468]: time="2024-10-08T20:04:42.732391921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:42.776291 systemd[1]: Started cri-containerd-fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd.scope - libcontainer container fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd. Oct 8 20:04:42.836651 containerd[1468]: time="2024-10-08T20:04:42.836497528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j86sj,Uid:e9a17030-3a1d-4d27-94e8-64bccf5f8ba4,Namespace:kube-system,Attempt:1,} returns sandbox id \"fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd\"" Oct 8 20:04:42.844065 containerd[1468]: time="2024-10-08T20:04:42.843640433Z" level=info msg="CreateContainer within sandbox \"fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:04:42.873056 containerd[1468]: time="2024-10-08T20:04:42.872947046Z" level=info msg="CreateContainer within sandbox \"fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"656ab52f20f9658cac05f5413e27047e5088b5c205c68cd057eaca4c6ef9da5e\"" Oct 8 20:04:42.874844 containerd[1468]: time="2024-10-08T20:04:42.873883162Z" level=info msg="StartContainer for \"656ab52f20f9658cac05f5413e27047e5088b5c205c68cd057eaca4c6ef9da5e\"" Oct 8 20:04:42.913295 systemd[1]: Started cri-containerd-656ab52f20f9658cac05f5413e27047e5088b5c205c68cd057eaca4c6ef9da5e.scope - libcontainer container 656ab52f20f9658cac05f5413e27047e5088b5c205c68cd057eaca4c6ef9da5e. Oct 8 20:04:42.954934 containerd[1468]: time="2024-10-08T20:04:42.954344390Z" level=info msg="StartContainer for \"656ab52f20f9658cac05f5413e27047e5088b5c205c68cd057eaca4c6ef9da5e\" returns successfully" Oct 8 20:04:43.394198 containerd[1468]: time="2024-10-08T20:04:43.393611169Z" level=info msg="StopPodSandbox for \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\"" Oct 8 20:04:43.394894 containerd[1468]: time="2024-10-08T20:04:43.394749892Z" level=info msg="StopPodSandbox for \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\"" Oct 8 20:04:43.399070 containerd[1468]: time="2024-10-08T20:04:43.397538989Z" level=info msg="StopPodSandbox for \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\"" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.547 [INFO][4041] k8s.go 608: Cleaning up netns ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.547 [INFO][4041] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" iface="eth0" netns="/var/run/netns/cni-8f435e89-3006-765f-c3a3-0c562919285b" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.548 [INFO][4041] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" iface="eth0" netns="/var/run/netns/cni-8f435e89-3006-765f-c3a3-0c562919285b" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.551 [INFO][4041] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" iface="eth0" netns="/var/run/netns/cni-8f435e89-3006-765f-c3a3-0c562919285b" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.551 [INFO][4041] k8s.go 615: Releasing IP address(es) ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.551 [INFO][4041] utils.go 188: Calico CNI releasing IP address ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.633 [INFO][4057] ipam_plugin.go 417: Releasing address using handleID ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.634 [INFO][4057] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.634 [INFO][4057] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.645 [WARNING][4057] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.645 [INFO][4057] ipam_plugin.go 445: Releasing address using workloadID ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.649 [INFO][4057] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:43.660667 containerd[1468]: 2024-10-08 20:04:43.656 [INFO][4041] k8s.go 621: Teardown processing complete. ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:43.666234 containerd[1468]: time="2024-10-08T20:04:43.666165122Z" level=info msg="TearDown network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\" successfully" Oct 8 20:04:43.666485 containerd[1468]: time="2024-10-08T20:04:43.666452487Z" level=info msg="StopPodSandbox for \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\" returns successfully" Oct 8 20:04:43.671340 containerd[1468]: time="2024-10-08T20:04:43.670378947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tq78,Uid:339880f2-88f3-4ae0-969e-5e762c1684c8,Namespace:calico-system,Attempt:1,}" Oct 8 20:04:43.673005 systemd[1]: run-netns-cni\x2d8f435e89\x2d3006\x2d765f\x2dc3a3\x2d0c562919285b.mount: Deactivated successfully. Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.551 [INFO][4031] k8s.go 608: Cleaning up netns ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.551 [INFO][4031] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" iface="eth0" netns="/var/run/netns/cni-e127b4c6-94f0-9665-e4cd-60a19264d954" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.552 [INFO][4031] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" iface="eth0" netns="/var/run/netns/cni-e127b4c6-94f0-9665-e4cd-60a19264d954" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.556 [INFO][4031] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" iface="eth0" netns="/var/run/netns/cni-e127b4c6-94f0-9665-e4cd-60a19264d954" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.556 [INFO][4031] k8s.go 615: Releasing IP address(es) ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.556 [INFO][4031] utils.go 188: Calico CNI releasing IP address ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.634 [INFO][4058] ipam_plugin.go 417: Releasing address using handleID ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.636 [INFO][4058] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.649 [INFO][4058] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.675 [WARNING][4058] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.675 [INFO][4058] ipam_plugin.go 445: Releasing address using workloadID ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.681 [INFO][4058] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:43.693974 containerd[1468]: 2024-10-08 20:04:43.684 [INFO][4031] k8s.go 621: Teardown processing complete. ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:43.697850 containerd[1468]: time="2024-10-08T20:04:43.695360381Z" level=info msg="TearDown network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\" successfully" Oct 8 20:04:43.697850 containerd[1468]: time="2024-10-08T20:04:43.697173575Z" level=info msg="StopPodSandbox for \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\" returns successfully" Oct 8 20:04:43.702765 systemd[1]: run-netns-cni\x2de127b4c6\x2d94f0\x2d9665\x2de4cd\x2d60a19264d954.mount: Deactivated successfully. 
Oct 8 20:04:43.712800 containerd[1468]: time="2024-10-08T20:04:43.712380672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g7x9j,Uid:bc1181be-ac87-4d2f-a808-b62c5fc38f5a,Namespace:kube-system,Attempt:1,}" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.580 [INFO][4042] k8s.go 608: Cleaning up netns ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.582 [INFO][4042] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" iface="eth0" netns="/var/run/netns/cni-16aa0636-d74a-b4f8-d88e-97e816be7b39" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.584 [INFO][4042] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" iface="eth0" netns="/var/run/netns/cni-16aa0636-d74a-b4f8-d88e-97e816be7b39" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.585 [INFO][4042] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" iface="eth0" netns="/var/run/netns/cni-16aa0636-d74a-b4f8-d88e-97e816be7b39" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.585 [INFO][4042] k8s.go 615: Releasing IP address(es) ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.586 [INFO][4042] utils.go 188: Calico CNI releasing IP address ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.680 [INFO][4065] ipam_plugin.go 417: Releasing address using handleID ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.680 [INFO][4065] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.681 [INFO][4065] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.701 [WARNING][4065] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.701 [INFO][4065] ipam_plugin.go 445: Releasing address using workloadID ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.707 [INFO][4065] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:43.714378 containerd[1468]: 2024-10-08 20:04:43.710 [INFO][4042] k8s.go 621: Teardown processing complete. 
ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:43.718077 containerd[1468]: time="2024-10-08T20:04:43.714525849Z" level=info msg="TearDown network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\" successfully" Oct 8 20:04:43.718077 containerd[1468]: time="2024-10-08T20:04:43.714594190Z" level=info msg="StopPodSandbox for \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\" returns successfully" Oct 8 20:04:43.722899 systemd[1]: run-netns-cni\x2d16aa0636\x2dd74a\x2db4f8\x2dd88e\x2d97e816be7b39.mount: Deactivated successfully. Oct 8 20:04:43.726055 containerd[1468]: time="2024-10-08T20:04:43.725980987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd5967bb4-bpclm,Uid:7d345d2a-35fa-4b82-8298-e61701951a29,Namespace:calico-system,Attempt:1,}" Oct 8 20:04:43.763501 kubelet[2620]: I1008 20:04:43.763394 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j86sj" podStartSLOduration=32.763163995 podStartE2EDuration="32.763163995s" podCreationTimestamp="2024-10-08 20:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:43.762498261 +0000 UTC m=+47.592546851" watchObservedRunningTime="2024-10-08 20:04:43.763163995 +0000 UTC m=+47.593212589" Oct 8 20:04:44.159129 systemd-networkd[1372]: cali67021dacb74: Link UP Oct 8 20:04:44.159548 systemd-networkd[1372]: cali67021dacb74: Gained carrier Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:43.946 [INFO][4077] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0 csi-node-driver- calico-system 339880f2-88f3-4ae0-969e-5e762c1684c8 693 0 2024-10-08 20:04:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal csi-node-driver-4tq78 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali67021dacb74 [] []}} ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:43.947 [INFO][4077] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.054 [INFO][4117] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" HandleID="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.072 [INFO][4117] ipam_plugin.go 
270: Auto assigning IP ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" HandleID="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050680), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", "pod":"csi-node-driver-4tq78", "timestamp":"2024-10-08 20:04:44.054875053 +0000 UTC"}, Hostname:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.072 [INFO][4117] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.073 [INFO][4117] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.073 [INFO][4117] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal' Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.080 [INFO][4117] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.096 [INFO][4117] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.110 [INFO][4117] ipam.go 489: Trying affinity for 192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.114 [INFO][4117] ipam.go 155: Attempting to load block cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.121 [INFO][4117] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.121 [INFO][4117] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.192/26 handle="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.124 [INFO][4117] ipam.go 1685: Creating new handle: k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8 Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.131 [INFO][4117] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.192/26 handle="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.139 [INFO][4117] ipam.go 1216: Successfully claimed IPs: [192.168.8.194/26] block=192.168.8.192/26 handle="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 
8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.140 [INFO][4117] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.194/26] handle="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.140 [INFO][4117] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:44.195311 containerd[1468]: 2024-10-08 20:04:44.140 [INFO][4117] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.194/26] IPv6=[] ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" HandleID="k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:44.196536 containerd[1468]: 2024-10-08 20:04:44.146 [INFO][4077] k8s.go 386: Populated endpoint ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"339880f2-88f3-4ae0-969e-5e762c1684c8", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-4tq78", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali67021dacb74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:44.196536 containerd[1468]: 2024-10-08 20:04:44.147 [INFO][4077] k8s.go 387: Calico CNI using IPs: [192.168.8.194/32] ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:44.196536 containerd[1468]: 2024-10-08 20:04:44.147 [INFO][4077] dataplane_linux.go 68: Setting the host side veth name to cali67021dacb74 ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:44.196536 
containerd[1468]: 2024-10-08 20:04:44.158 [INFO][4077] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:44.196536 containerd[1468]: 2024-10-08 20:04:44.163 [INFO][4077] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"339880f2-88f3-4ae0-969e-5e762c1684c8", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8", Pod:"csi-node-driver-4tq78", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali67021dacb74", MAC:"f6:fb:8a:b0:34:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:44.196536 containerd[1468]: 2024-10-08 20:04:44.191 [INFO][4077] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8" Namespace="calico-system" Pod="csi-node-driver-4tq78" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:44.264061 systemd-networkd[1372]: califd89da254e9: Link UP Oct 8 20:04:44.269026 systemd-networkd[1372]: califd89da254e9: Gained carrier Oct 8 20:04:44.296580 containerd[1468]: time="2024-10-08T20:04:44.296389911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:44.298151 containerd[1468]: time="2024-10-08T20:04:44.298067047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:44.298354 containerd[1468]: time="2024-10-08T20:04:44.298123887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:44.298354 containerd[1468]: time="2024-10-08T20:04:44.298324365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:43.964 [INFO][4088] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0 calico-kube-controllers-6dd5967bb4- calico-system 7d345d2a-35fa-4b82-8298-e61701951a29 695 0 2024-10-08 20:04:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6dd5967bb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal calico-kube-controllers-6dd5967bb4-bpclm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califd89da254e9 [] []}} ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:43.965 [INFO][4088] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.100 [INFO][4122] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" HandleID="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.119 [INFO][4122] ipam_plugin.go 270: Auto assigning IP ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" HandleID="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7b00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6dd5967bb4-bpclm", "timestamp":"2024-10-08 20:04:44.100496217 +0000 UTC"}, Hostname:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.119 [INFO][4122] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.141 [INFO][4122] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.141 [INFO][4122] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal' Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.146 [INFO][4122] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.163 [INFO][4122] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.182 [INFO][4122] ipam.go 489: Trying affinity for 192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.189 [INFO][4122] ipam.go 155: Attempting to load block cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.200 [INFO][4122] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.200 [INFO][4122] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.192/26 handle="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.204 [INFO][4122] ipam.go 1685: Creating new handle: k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.213 [INFO][4122] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.192/26 handle="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.231 [INFO][4122] ipam.go 1216: Successfully claimed IPs: [192.168.8.195/26] block=192.168.8.192/26 handle="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.231 [INFO][4122] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.195/26] handle="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.231 [INFO][4122] ipam_plugin.go 379: Released host-wide IPAM lock. 
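The ipam.go / ipam_plugin.go entries above repeat the same sequence for every sandbox: acquire the host-wide IPAM lock, look up the node's block affinity (192.168.8.192/26 here), load the block, claim the next free address under a handle named after the container, then release the lock. Below is a minimal, self-contained Go sketch of that lock-then-claim pattern; the toyBlock type and nextIP helper are illustrative stand-ins, not Calico's libcalico-go API.

package main

import (
	"fmt"
	"net"
	"sync"
)

// toyBlock is an illustrative stand-in for an IPAM block such as
// 192.168.8.192/26: a CIDR plus a record of which addresses are handed out.
type toyBlock struct {
	cidr      *net.IPNet
	allocated map[string]string // IP -> handle, e.g. "k8s-pod-network.<containerID>"
}

// hostLock models the "host-wide IPAM lock" each CNI ADD acquires before
// touching any block, so concurrent requests on one node serialize.
var hostLock sync.Mutex

// autoAssign claims one IPv4 address from the block for the given handle,
// mirroring the "Auto-assign 1 ipv4" / "Creating new handle" / "claimed IPs" steps.
func autoAssign(b *toyBlock, handle string) (net.IP, error) {
	hostLock.Lock()
	defer hostLock.Unlock()

	// Walk the block and take the first address not yet allocated.
	ip := b.cidr.IP.Mask(b.cidr.Mask)
	for ; b.cidr.Contains(ip); ip = nextIP(ip) {
		if _, used := b.allocated[ip.String()]; !used {
			b.allocated[ip.String()] = handle
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

// nextIP returns ip+1 (IPv4 only, good enough for a /26 demo).
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, 4)
	copy(out, ip.To4())
	for i := 3; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.8.192/26")
	block := &toyBlock{cidr: cidr, allocated: map[string]string{
		// Pretend .192 and .193 are already taken, so .194 is next, as in the log.
		"192.168.8.192": "reserved", "192.168.8.193": "earlier-pod",
	}}
	ip, _ := autoAssign(block, "k8s-pod-network.b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8")
	fmt.Println("assigned", ip) // assigned 192.168.8.194
}

With .192 and .193 marked as taken, the sketch hands out 192.168.8.194, the same address the [4117] request for csi-node-driver-4tq78 receives above.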
Oct 8 20:04:44.320349 containerd[1468]: 2024-10-08 20:04:44.231 [INFO][4122] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.195/26] IPv6=[] ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" HandleID="k8s-pod-network.25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:44.323925 containerd[1468]: 2024-10-08 20:04:44.245 [INFO][4088] k8s.go 386: Populated endpoint ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0", GenerateName:"calico-kube-controllers-6dd5967bb4-", Namespace:"calico-system", SelfLink:"", UID:"7d345d2a-35fa-4b82-8298-e61701951a29", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd5967bb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6dd5967bb4-bpclm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd89da254e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:44.323925 containerd[1468]: 2024-10-08 20:04:44.247 [INFO][4088] k8s.go 387: Calico CNI using IPs: [192.168.8.195/32] ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:44.323925 containerd[1468]: 2024-10-08 20:04:44.249 [INFO][4088] dataplane_linux.go 68: Setting the host side veth name to califd89da254e9 ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:44.323925 containerd[1468]: 2024-10-08 20:04:44.275 [INFO][4088] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" 
Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:44.323925 containerd[1468]: 2024-10-08 20:04:44.283 [INFO][4088] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0", GenerateName:"calico-kube-controllers-6dd5967bb4-", Namespace:"calico-system", SelfLink:"", UID:"7d345d2a-35fa-4b82-8298-e61701951a29", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd5967bb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe", Pod:"calico-kube-controllers-6dd5967bb4-bpclm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd89da254e9", MAC:"8a:e2:f8:f4:18:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:44.323925 containerd[1468]: 2024-10-08 20:04:44.314 [INFO][4088] k8s.go 500: Wrote updated endpoint to datastore ContainerID="25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe" Namespace="calico-system" Pod="calico-kube-controllers-6dd5967bb4-bpclm" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:44.359362 systemd-networkd[1372]: caliddf8e829326: Gained IPv6LL Oct 8 20:04:44.362407 systemd[1]: Started cri-containerd-b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8.scope - libcontainer container b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8. 
Oct 8 20:04:44.376585 systemd-networkd[1372]: calib585b99e978: Link UP Oct 8 20:04:44.379794 systemd-networkd[1372]: calib585b99e978: Gained carrier Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:43.984 [INFO][4085] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0 coredns-7db6d8ff4d- kube-system bc1181be-ac87-4d2f-a808-b62c5fc38f5a 694 0 2024-10-08 20:04:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal coredns-7db6d8ff4d-g7x9j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib585b99e978 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:43.985 [INFO][4085] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.098 [INFO][4123] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" HandleID="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.119 [INFO][4123] ipam_plugin.go 270: Auto assigning IP ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" HandleID="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319340), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-g7x9j", "timestamp":"2024-10-08 20:04:44.098284798 +0000 UTC"}, Hostname:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.120 [INFO][4123] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.232 [INFO][4123] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.234 [INFO][4123] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal' Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.242 [INFO][4123] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.256 [INFO][4123] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.281 [INFO][4123] ipam.go 489: Trying affinity for 192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.291 [INFO][4123] ipam.go 155: Attempting to load block cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.303 [INFO][4123] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.306 [INFO][4123] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.192/26 handle="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.315 [INFO][4123] ipam.go 1685: Creating new handle: k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.329 [INFO][4123] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.192/26 handle="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.348 [INFO][4123] ipam.go 1216: Successfully claimed IPs: [192.168.8.196/26] block=192.168.8.192/26 handle="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.348 [INFO][4123] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.196/26] handle="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.348 [INFO][4123] ipam_plugin.go 379: Released host-wide IPAM lock. 
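Reading the three interleaved CNI ADDs above, the host-wide IPAM lock visibly serializes them: [4117] is granted the lock almost immediately, [4122] asks at 44.119 and gets it at 44.141, and [4123] asks at 44.120 but only gets it at 44.232, once [4122] has released it. A small Go sketch, with the timestamps copied verbatim from the ipam_plugin.go 358/373 lines, turns those pairs into wait durations:

package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the "About to acquire" / "Acquired host-wide IPAM
// lock" pairs logged above for the three concurrent CNI ADD requests.
func main() {
	const layout = "2006-01-02 15:04:05.000"
	waits := []struct {
		who                string
		requested, granted string
	}{
		{"[4117] csi-node-driver-4tq78", "2024-10-08 20:04:44.072", "2024-10-08 20:04:44.073"},
		{"[4122] calico-kube-controllers-6dd5967bb4-bpclm", "2024-10-08 20:04:44.119", "2024-10-08 20:04:44.141"},
		{"[4123] coredns-7db6d8ff4d-g7x9j", "2024-10-08 20:04:44.120", "2024-10-08 20:04:44.232"},
	}
	for _, w := range waits {
		req, _ := time.Parse(layout, w.requested)
		got, _ := time.Parse(layout, w.granted)
		fmt.Printf("%-48s waited %v for the IPAM lock\n", w.who, got.Sub(req))
	}
}

It reports waits of 1ms, 22ms and 112ms respectively.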
Oct 8 20:04:44.440056 containerd[1468]: 2024-10-08 20:04:44.348 [INFO][4123] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.196/26] IPv6=[] ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" HandleID="k8s-pod-network.464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:44.442774 containerd[1468]: 2024-10-08 20:04:44.366 [INFO][4085] k8s.go 386: Populated endpoint ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc1181be-ac87-4d2f-a808-b62c5fc38f5a", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-g7x9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib585b99e978", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:44.442774 containerd[1468]: 2024-10-08 20:04:44.367 [INFO][4085] k8s.go 387: Calico CNI using IPs: [192.168.8.196/32] ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:44.442774 containerd[1468]: 2024-10-08 20:04:44.368 [INFO][4085] dataplane_linux.go 68: Setting the host side veth name to calib585b99e978 ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:44.442774 containerd[1468]: 2024-10-08 20:04:44.384 [INFO][4085] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:44.442774 containerd[1468]: 2024-10-08 20:04:44.389 [INFO][4085] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc1181be-ac87-4d2f-a808-b62c5fc38f5a", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa", Pod:"coredns-7db6d8ff4d-g7x9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib585b99e978", MAC:"aa:fe:b9:16:be:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:44.442774 containerd[1468]: 2024-10-08 20:04:44.433 [INFO][4085] k8s.go 500: Wrote updated endpoint to datastore ContainerID="464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g7x9j" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:44.473829 containerd[1468]: time="2024-10-08T20:04:44.473159563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:44.473829 containerd[1468]: time="2024-10-08T20:04:44.473270836Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:44.473829 containerd[1468]: time="2024-10-08T20:04:44.473299113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:44.476217 containerd[1468]: time="2024-10-08T20:04:44.473905186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:44.534160 containerd[1468]: time="2024-10-08T20:04:44.531468652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:44.534160 containerd[1468]: time="2024-10-08T20:04:44.531569270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:44.534160 containerd[1468]: time="2024-10-08T20:04:44.531603350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:44.534160 containerd[1468]: time="2024-10-08T20:04:44.531787893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:44.585784 containerd[1468]: time="2024-10-08T20:04:44.585626059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4tq78,Uid:339880f2-88f3-4ae0-969e-5e762c1684c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8\"" Oct 8 20:04:44.596291 systemd[1]: Started cri-containerd-25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe.scope - libcontainer container 25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe. Oct 8 20:04:44.610124 containerd[1468]: time="2024-10-08T20:04:44.607965310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:04:44.612286 systemd[1]: Started cri-containerd-464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa.scope - libcontainer container 464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa. 
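Every assignment above ends with the same ipam_plugin.go 288 summary ("Calico CNI IPAM assigned addresses IPv4=[...] ... ContainerID=..."), which makes a capture like this easy to tabulate. The sketch below scans a journal on stdin and prints a container-ID prefix next to its assigned address; the regular expression is written against the exact wording of those summary lines and nothing more.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the ipam_plugin.go 288 summary lines in this journal, e.g.
//   ... Calico CNI IPAM assigned addresses IPv4=[192.168.8.194/26] IPv6=[] ContainerID="b59e..." ...
var assigned = regexp.MustCompile(
	`Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\] IPv6=\[[^\]]*\] ContainerID="([0-9a-f]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := assigned.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%.12s  %s\n", m[2], m[1])
		}
	}
}

Fed this excerpt, it reports 192.168.8.194/26 for b59e148c86f3..., 192.168.8.195/26 for 25aa7f5136b8... and 192.168.8.196/26 for 464ed06561b0..., the three sandboxes brought up at 20:04:44.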
Oct 8 20:04:44.716182 containerd[1468]: time="2024-10-08T20:04:44.715589725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g7x9j,Uid:bc1181be-ac87-4d2f-a808-b62c5fc38f5a,Namespace:kube-system,Attempt:1,} returns sandbox id \"464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa\"" Oct 8 20:04:44.726748 containerd[1468]: time="2024-10-08T20:04:44.726684666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6dd5967bb4-bpclm,Uid:7d345d2a-35fa-4b82-8298-e61701951a29,Namespace:calico-system,Attempt:1,} returns sandbox id \"25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe\"" Oct 8 20:04:44.728191 containerd[1468]: time="2024-10-08T20:04:44.726812098Z" level=info msg="CreateContainer within sandbox \"464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:04:44.758285 containerd[1468]: time="2024-10-08T20:04:44.758220836Z" level=info msg="CreateContainer within sandbox \"464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"782ef8313a72bac97287572200aac263c258642f9815413805dccc0e775d501b\"" Oct 8 20:04:44.760041 containerd[1468]: time="2024-10-08T20:04:44.758928335Z" level=info msg="StartContainer for \"782ef8313a72bac97287572200aac263c258642f9815413805dccc0e775d501b\"" Oct 8 20:04:44.803285 systemd[1]: Started cri-containerd-782ef8313a72bac97287572200aac263c258642f9815413805dccc0e775d501b.scope - libcontainer container 782ef8313a72bac97287572200aac263c258642f9815413805dccc0e775d501b. Oct 8 20:04:44.850636 containerd[1468]: time="2024-10-08T20:04:44.850171166Z" level=info msg="StartContainer for \"782ef8313a72bac97287572200aac263c258642f9815413805dccc0e775d501b\" returns successfully" Oct 8 20:04:45.314990 systemd-networkd[1372]: califd89da254e9: Gained IPv6LL Oct 8 20:04:45.571003 systemd-networkd[1372]: calib585b99e978: Gained IPv6LL Oct 8 20:04:45.656520 containerd[1468]: time="2024-10-08T20:04:45.656404866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:45.658135 containerd[1468]: time="2024-10-08T20:04:45.658039679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081" Oct 8 20:04:45.659456 containerd[1468]: time="2024-10-08T20:04:45.659383868Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:45.663476 containerd[1468]: time="2024-10-08T20:04:45.662998828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:45.664215 containerd[1468]: time="2024-10-08T20:04:45.664167639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.056127649s" Oct 8 20:04:45.664349 containerd[1468]: time="2024-10-08T20:04:45.664223138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image 
reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\"" Oct 8 20:04:45.666196 containerd[1468]: time="2024-10-08T20:04:45.666167347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 20:04:45.668678 containerd[1468]: time="2024-10-08T20:04:45.668328845Z" level=info msg="CreateContainer within sandbox \"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:04:45.692370 containerd[1468]: time="2024-10-08T20:04:45.692314617Z" level=info msg="CreateContainer within sandbox \"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3f8372f6b650dc9bcd3d02a798529cbb10fccac5365bb25463154fdf7226b60d\"" Oct 8 20:04:45.694074 containerd[1468]: time="2024-10-08T20:04:45.693826875Z" level=info msg="StartContainer for \"3f8372f6b650dc9bcd3d02a798529cbb10fccac5365bb25463154fdf7226b60d\"" Oct 8 20:04:45.753494 systemd[1]: Started cri-containerd-3f8372f6b650dc9bcd3d02a798529cbb10fccac5365bb25463154fdf7226b60d.scope - libcontainer container 3f8372f6b650dc9bcd3d02a798529cbb10fccac5365bb25463154fdf7226b60d. Oct 8 20:04:45.810575 kubelet[2620]: I1008 20:04:45.810447 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g7x9j" podStartSLOduration=34.81041311 podStartE2EDuration="34.81041311s" podCreationTimestamp="2024-10-08 20:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:04:45.78008885 +0000 UTC m=+49.610137441" watchObservedRunningTime="2024-10-08 20:04:45.81041311 +0000 UTC m=+49.640461702" Oct 8 20:04:45.836706 containerd[1468]: time="2024-10-08T20:04:45.836532304Z" level=info msg="StartContainer for \"3f8372f6b650dc9bcd3d02a798529cbb10fccac5365bb25463154fdf7226b60d\" returns successfully" Oct 8 20:04:46.146555 systemd-networkd[1372]: cali67021dacb74: Gained IPv6LL Oct 8 20:04:47.691762 containerd[1468]: time="2024-10-08T20:04:47.691690675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:47.693181 containerd[1468]: time="2024-10-08T20:04:47.693115355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125" Oct 8 20:04:47.694964 containerd[1468]: time="2024-10-08T20:04:47.694920376Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:47.698043 containerd[1468]: time="2024-10-08T20:04:47.697983856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:47.699365 containerd[1468]: time="2024-10-08T20:04:47.699151339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 2.032417928s" Oct 8 20:04:47.699365 
containerd[1468]: time="2024-10-08T20:04:47.699204475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\"" Oct 8 20:04:47.701043 containerd[1468]: time="2024-10-08T20:04:47.700738118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:04:47.723465 containerd[1468]: time="2024-10-08T20:04:47.723384927Z" level=info msg="CreateContainer within sandbox \"25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 20:04:47.755683 containerd[1468]: time="2024-10-08T20:04:47.755625498Z" level=info msg="CreateContainer within sandbox \"25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6d0c2b01b76f752b6306bf4f341b6e92f32d1050c1875397083ab1c210322379\"" Oct 8 20:04:47.758058 containerd[1468]: time="2024-10-08T20:04:47.756511860Z" level=info msg="StartContainer for \"6d0c2b01b76f752b6306bf4f341b6e92f32d1050c1875397083ab1c210322379\"" Oct 8 20:04:47.812293 systemd[1]: Started cri-containerd-6d0c2b01b76f752b6306bf4f341b6e92f32d1050c1875397083ab1c210322379.scope - libcontainer container 6d0c2b01b76f752b6306bf4f341b6e92f32d1050c1875397083ab1c210322379. Oct 8 20:04:47.883520 containerd[1468]: time="2024-10-08T20:04:47.883460103Z" level=info msg="StartContainer for \"6d0c2b01b76f752b6306bf4f341b6e92f32d1050c1875397083ab1c210322379\" returns successfully" Oct 8 20:04:48.372143 ntpd[1436]: Listen normally on 9 caliddf8e829326 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 8 20:04:48.372568 ntpd[1436]: Listen normally on 10 cali67021dacb74 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 8 20:04:48.373796 ntpd[1436]: 8 Oct 20:04:48 ntpd[1436]: Listen normally on 9 caliddf8e829326 [fe80::ecee:eeff:feee:eeee%7]:123 Oct 8 20:04:48.373796 ntpd[1436]: 8 Oct 20:04:48 ntpd[1436]: Listen normally on 10 cali67021dacb74 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 8 20:04:48.373796 ntpd[1436]: 8 Oct 20:04:48 ntpd[1436]: Listen normally on 11 califd89da254e9 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 20:04:48.373796 ntpd[1436]: 8 Oct 20:04:48 ntpd[1436]: Listen normally on 12 calib585b99e978 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 20:04:48.373147 ntpd[1436]: Listen normally on 11 califd89da254e9 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 20:04:48.373294 ntpd[1436]: Listen normally on 12 calib585b99e978 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 20:04:48.834664 kubelet[2620]: I1008 20:04:48.832403 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6dd5967bb4-bpclm" podStartSLOduration=27.867657836 podStartE2EDuration="30.832365954s" podCreationTimestamp="2024-10-08 20:04:18 +0000 UTC" firstStartedPulling="2024-10-08 20:04:44.735862303 +0000 UTC m=+48.565910880" lastFinishedPulling="2024-10-08 20:04:47.700570423 +0000 UTC m=+51.530618998" observedRunningTime="2024-10-08 20:04:48.827424434 +0000 UTC m=+52.657473078" watchObservedRunningTime="2024-10-08 20:04:48.832365954 +0000 UTC m=+52.662414546" Oct 8 20:04:49.422386 containerd[1468]: time="2024-10-08T20:04:49.422315880Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:49.426754 containerd[1468]: time="2024-10-08T20:04:49.426634423Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822" Oct 8 20:04:49.430073 containerd[1468]: time="2024-10-08T20:04:49.428579953Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:49.432966 containerd[1468]: time="2024-10-08T20:04:49.432914523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:49.435859 containerd[1468]: time="2024-10-08T20:04:49.435799547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.735015078s" Oct 8 20:04:49.436044 containerd[1468]: time="2024-10-08T20:04:49.435869546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\"" Oct 8 20:04:49.440543 containerd[1468]: time="2024-10-08T20:04:49.440506426Z" level=info msg="CreateContainer within sandbox \"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:04:49.465124 containerd[1468]: time="2024-10-08T20:04:49.465037885Z" level=info msg="CreateContainer within sandbox \"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fd2e690d55b22acd32edc77ae372b5ad997acef0f3a0bdd0986787a06213e6e8\"" Oct 8 20:04:49.467276 containerd[1468]: time="2024-10-08T20:04:49.467215815Z" level=info msg="StartContainer for \"fd2e690d55b22acd32edc77ae372b5ad997acef0f3a0bdd0986787a06213e6e8\"" Oct 8 20:04:49.569600 systemd[1]: Started cri-containerd-fd2e690d55b22acd32edc77ae372b5ad997acef0f3a0bdd0986787a06213e6e8.scope - libcontainer container fd2e690d55b22acd32edc77ae372b5ad997acef0f3a0bdd0986787a06213e6e8. 
Oct 8 20:04:49.639878 containerd[1468]: time="2024-10-08T20:04:49.639809490Z" level=info msg="StartContainer for \"fd2e690d55b22acd32edc77ae372b5ad997acef0f3a0bdd0986787a06213e6e8\" returns successfully" Oct 8 20:04:50.030363 kubelet[2620]: I1008 20:04:50.029952 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4tq78" podStartSLOduration=27.199676573 podStartE2EDuration="32.029915385s" podCreationTimestamp="2024-10-08 20:04:18 +0000 UTC" firstStartedPulling="2024-10-08 20:04:44.607169128 +0000 UTC m=+48.437217709" lastFinishedPulling="2024-10-08 20:04:49.437407943 +0000 UTC m=+53.267456521" observedRunningTime="2024-10-08 20:04:49.861370967 +0000 UTC m=+53.691419559" watchObservedRunningTime="2024-10-08 20:04:50.029915385 +0000 UTC m=+53.859963976" Oct 8 20:04:50.560137 kubelet[2620]: I1008 20:04:50.560085 2620 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:04:50.560137 kubelet[2620]: I1008 20:04:50.560147 2620 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:04:53.103576 kubelet[2620]: I1008 20:04:53.102256 2620 topology_manager.go:215] "Topology Admit Handler" podUID="d94586f1-ae4a-44a1-b9c7-232aa0fefc3e" podNamespace="calico-apiserver" podName="calico-apiserver-555d85cd96-284s2" Oct 8 20:04:53.126192 systemd[1]: Created slice kubepods-besteffort-podd94586f1_ae4a_44a1_b9c7_232aa0fefc3e.slice - libcontainer container kubepods-besteffort-podd94586f1_ae4a_44a1_b9c7_232aa0fefc3e.slice. Oct 8 20:04:53.144110 kubelet[2620]: I1008 20:04:53.144036 2620 topology_manager.go:215] "Topology Admit Handler" podUID="44c12864-2e30-4929-b46a-e1595adedd4b" podNamespace="calico-apiserver" podName="calico-apiserver-555d85cd96-j7nqh" Oct 8 20:04:53.161564 systemd[1]: Created slice kubepods-besteffort-pod44c12864_2e30_4929_b46a_e1595adedd4b.slice - libcontainer container kubepods-besteffort-pod44c12864_2e30_4929_b46a_e1595adedd4b.slice. 
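The kubelet pod_startup_latency_tracker line above reports two figures for csi-node-driver-4tq78: podStartE2EDuration, which is observedRunningTime minus podCreationTimestamp, and podStartSLOduration, which for these numbers equals the E2E figure minus the image-pull window between firstStartedPulling and lastFinishedPulling. The sketch below reproduces that arithmetic from the logged timestamps; the subtraction rule is inferred from these values rather than quoted from kubelet source.

package main

import (
	"fmt"
	"strings"
	"time"
)

// Values copied from the kubelet pod_startup_latency_tracker line above for
// pod calico-system/csi-node-driver-4tq78.
const (
	created       = "2024-10-08 20:04:18 +0000 UTC"
	firstPulling  = "2024-10-08 20:04:44.607169128 +0000 UTC m=+48.437217709"
	lastPulled    = "2024-10-08 20:04:49.437407943 +0000 UTC m=+53.267456521"
	observedReady = "2024-10-08 20:04:50.029915385 +0000 UTC m=+53.859963976"
)

// parse drops the " m=+..." monotonic-clock suffix kubelet appends and reads
// the remaining wall-clock portion.
func parse(s string) time.Time {
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	e2e := parse(observedReady).Sub(parse(created))
	pullWindow := parse(lastPulled).Sub(parse(firstPulling))
	fmt.Println("podStartE2EDuration:", e2e)                    // 32.029915385s, as logged
	fmt.Println("E2E minus image-pull window:", e2e-pullWindow) // matches the logged podStartSLOduration to within rounding
}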
Oct 8 20:04:53.219552 kubelet[2620]: I1008 20:04:53.219207 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j52hn\" (UniqueName: \"kubernetes.io/projected/44c12864-2e30-4929-b46a-e1595adedd4b-kube-api-access-j52hn\") pod \"calico-apiserver-555d85cd96-j7nqh\" (UID: \"44c12864-2e30-4929-b46a-e1595adedd4b\") " pod="calico-apiserver/calico-apiserver-555d85cd96-j7nqh" Oct 8 20:04:53.219552 kubelet[2620]: I1008 20:04:53.219290 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d94586f1-ae4a-44a1-b9c7-232aa0fefc3e-calico-apiserver-certs\") pod \"calico-apiserver-555d85cd96-284s2\" (UID: \"d94586f1-ae4a-44a1-b9c7-232aa0fefc3e\") " pod="calico-apiserver/calico-apiserver-555d85cd96-284s2" Oct 8 20:04:53.219552 kubelet[2620]: I1008 20:04:53.219328 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/44c12864-2e30-4929-b46a-e1595adedd4b-calico-apiserver-certs\") pod \"calico-apiserver-555d85cd96-j7nqh\" (UID: \"44c12864-2e30-4929-b46a-e1595adedd4b\") " pod="calico-apiserver/calico-apiserver-555d85cd96-j7nqh" Oct 8 20:04:53.219552 kubelet[2620]: I1008 20:04:53.219373 2620 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6cp8\" (UniqueName: \"kubernetes.io/projected/d94586f1-ae4a-44a1-b9c7-232aa0fefc3e-kube-api-access-z6cp8\") pod \"calico-apiserver-555d85cd96-284s2\" (UID: \"d94586f1-ae4a-44a1-b9c7-232aa0fefc3e\") " pod="calico-apiserver/calico-apiserver-555d85cd96-284s2" Oct 8 20:04:53.324049 kubelet[2620]: E1008 20:04:53.320917 2620 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:04:53.324049 kubelet[2620]: E1008 20:04:53.321057 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d94586f1-ae4a-44a1-b9c7-232aa0fefc3e-calico-apiserver-certs podName:d94586f1-ae4a-44a1-b9c7-232aa0fefc3e nodeName:}" failed. No retries permitted until 2024-10-08 20:04:53.821023657 +0000 UTC m=+57.651072242 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d94586f1-ae4a-44a1-b9c7-232aa0fefc3e-calico-apiserver-certs") pod "calico-apiserver-555d85cd96-284s2" (UID: "d94586f1-ae4a-44a1-b9c7-232aa0fefc3e") : secret "calico-apiserver-certs" not found Oct 8 20:04:53.324049 kubelet[2620]: E1008 20:04:53.321404 2620 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:04:53.324049 kubelet[2620]: E1008 20:04:53.321462 2620 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44c12864-2e30-4929-b46a-e1595adedd4b-calico-apiserver-certs podName:44c12864-2e30-4929-b46a-e1595adedd4b nodeName:}" failed. No retries permitted until 2024-10-08 20:04:53.821444931 +0000 UTC m=+57.651493507 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/44c12864-2e30-4929-b46a-e1595adedd4b-calico-apiserver-certs") pod "calico-apiserver-555d85cd96-j7nqh" (UID: "44c12864-2e30-4929-b46a-e1595adedd4b") : secret "calico-apiserver-certs" not found Oct 8 20:04:54.033296 containerd[1468]: time="2024-10-08T20:04:54.033228792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555d85cd96-284s2,Uid:d94586f1-ae4a-44a1-b9c7-232aa0fefc3e,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:04:54.070299 containerd[1468]: time="2024-10-08T20:04:54.069502266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555d85cd96-j7nqh,Uid:44c12864-2e30-4929-b46a-e1595adedd4b,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:04:54.361763 systemd-networkd[1372]: cali988f26e6bd3: Link UP Oct 8 20:04:54.362215 systemd-networkd[1372]: cali988f26e6bd3: Gained carrier Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.170 [INFO][4535] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0 calico-apiserver-555d85cd96- calico-apiserver d94586f1-ae4a-44a1-b9c7-232aa0fefc3e 809 0 2024-10-08 20:04:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:555d85cd96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal calico-apiserver-555d85cd96-284s2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali988f26e6bd3 [] []}} ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.171 [INFO][4535] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.265 [INFO][4557] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" HandleID="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.304 [INFO][4557] ipam_plugin.go 270: Auto assigning IP ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" HandleID="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", 
"pod":"calico-apiserver-555d85cd96-284s2", "timestamp":"2024-10-08 20:04:54.265617182 +0000 UTC"}, Hostname:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.305 [INFO][4557] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.305 [INFO][4557] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.305 [INFO][4557] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal' Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.307 [INFO][4557] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.313 [INFO][4557] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.319 [INFO][4557] ipam.go 489: Trying affinity for 192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.322 [INFO][4557] ipam.go 155: Attempting to load block cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.326 [INFO][4557] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.326 [INFO][4557] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.192/26 handle="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.328 [INFO][4557] ipam.go 1685: Creating new handle: k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.334 [INFO][4557] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.192/26 handle="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.346 [INFO][4557] ipam.go 1216: Successfully claimed IPs: [192.168.8.197/26] block=192.168.8.192/26 handle="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.346 [INFO][4557] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.197/26] handle="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.346 [INFO][4557] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:04:54.407703 containerd[1468]: 2024-10-08 20:04:54.346 [INFO][4557] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.197/26] IPv6=[] ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" HandleID="k8s-pod-network.3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" Oct 8 20:04:54.413999 containerd[1468]: 2024-10-08 20:04:54.350 [INFO][4535] k8s.go 386: Populated endpoint ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0", GenerateName:"calico-apiserver-555d85cd96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94586f1-ae4a-44a1-b9c7-232aa0fefc3e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555d85cd96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-555d85cd96-284s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali988f26e6bd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:54.413999 containerd[1468]: 2024-10-08 20:04:54.351 [INFO][4535] k8s.go 387: Calico CNI using IPs: [192.168.8.197/32] ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" Oct 8 20:04:54.413999 containerd[1468]: 2024-10-08 20:04:54.351 [INFO][4535] dataplane_linux.go 68: Setting the host side veth name to cali988f26e6bd3 ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" Oct 8 20:04:54.413999 containerd[1468]: 2024-10-08 20:04:54.361 [INFO][4535] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" 
WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" Oct 8 20:04:54.413999 containerd[1468]: 2024-10-08 20:04:54.364 [INFO][4535] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0", GenerateName:"calico-apiserver-555d85cd96-", Namespace:"calico-apiserver", SelfLink:"", UID:"d94586f1-ae4a-44a1-b9c7-232aa0fefc3e", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555d85cd96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa", Pod:"calico-apiserver-555d85cd96-284s2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali988f26e6bd3", MAC:"9a:a7:0f:d5:2d:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:54.413999 containerd[1468]: 2024-10-08 20:04:54.395 [INFO][4535] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-284s2" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--284s2-eth0" Oct 8 20:04:54.485502 systemd-networkd[1372]: cali495911a06c5: Link UP Oct 8 20:04:54.489286 systemd-networkd[1372]: cali495911a06c5: Gained carrier Oct 8 20:04:54.494198 containerd[1468]: time="2024-10-08T20:04:54.490674639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:54.494198 containerd[1468]: time="2024-10-08T20:04:54.490778880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:54.494198 containerd[1468]: time="2024-10-08T20:04:54.490806868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:54.494198 containerd[1468]: time="2024-10-08T20:04:54.490944986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.216 [INFO][4546] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0 calico-apiserver-555d85cd96- calico-apiserver 44c12864-2e30-4929-b46a-e1595adedd4b 812 0 2024-10-08 20:04:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:555d85cd96 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal calico-apiserver-555d85cd96-j7nqh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali495911a06c5 [] []}} ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.217 [INFO][4546] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.292 [INFO][4563] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" HandleID="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.308 [INFO][4563] ipam_plugin.go 270: Auto assigning IP ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" HandleID="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e62d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", "pod":"calico-apiserver-555d85cd96-j7nqh", "timestamp":"2024-10-08 20:04:54.292711062 +0000 UTC"}, Hostname:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.308 [INFO][4563] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.347 [INFO][4563] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.347 [INFO][4563] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal' Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.350 [INFO][4563] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.369 [INFO][4563] ipam.go 372: Looking up existing affinities for host host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.387 [INFO][4563] ipam.go 489: Trying affinity for 192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.413 [INFO][4563] ipam.go 155: Attempting to load block cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.425 [INFO][4563] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.8.192/26 host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.427 [INFO][4563] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.8.192/26 handle="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.434 [INFO][4563] ipam.go 1685: Creating new handle: k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.457 [INFO][4563] ipam.go 1203: Writing block in order to claim IPs block=192.168.8.192/26 handle="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.474 [INFO][4563] ipam.go 1216: Successfully claimed IPs: [192.168.8.198/26] block=192.168.8.192/26 handle="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.475 [INFO][4563] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.8.198/26] handle="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" host="ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal" Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.476 [INFO][4563] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:04:54.572653 containerd[1468]: 2024-10-08 20:04:54.476 [INFO][4563] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.8.198/26] IPv6=[] ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" HandleID="k8s-pod-network.f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" Oct 8 20:04:54.580588 containerd[1468]: 2024-10-08 20:04:54.480 [INFO][4546] k8s.go 386: Populated endpoint ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0", GenerateName:"calico-apiserver-555d85cd96-", Namespace:"calico-apiserver", SelfLink:"", UID:"44c12864-2e30-4929-b46a-e1595adedd4b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555d85cd96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-555d85cd96-j7nqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali495911a06c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:54.580588 containerd[1468]: 2024-10-08 20:04:54.481 [INFO][4546] k8s.go 387: Calico CNI using IPs: [192.168.8.198/32] ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" Oct 8 20:04:54.580588 containerd[1468]: 2024-10-08 20:04:54.481 [INFO][4546] dataplane_linux.go 68: Setting the host side veth name to cali495911a06c5 ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" Oct 8 20:04:54.580588 containerd[1468]: 2024-10-08 20:04:54.484 [INFO][4546] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" 
WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" Oct 8 20:04:54.580588 containerd[1468]: 2024-10-08 20:04:54.485 [INFO][4546] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0", GenerateName:"calico-apiserver-555d85cd96-", Namespace:"calico-apiserver", SelfLink:"", UID:"44c12864-2e30-4929-b46a-e1595adedd4b", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"555d85cd96", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c", Pod:"calico-apiserver-555d85cd96-j7nqh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.8.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali495911a06c5", MAC:"fa:16:45:f8:49:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:54.580588 containerd[1468]: 2024-10-08 20:04:54.533 [INFO][4546] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c" Namespace="calico-apiserver" Pod="calico-apiserver-555d85cd96-j7nqh" WorkloadEndpoint="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--apiserver--555d85cd96--j7nqh-eth0" Oct 8 20:04:54.576397 systemd[1]: Started cri-containerd-3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa.scope - libcontainer container 3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa. Oct 8 20:04:54.685313 containerd[1468]: time="2024-10-08T20:04:54.685149903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:04:54.686118 containerd[1468]: time="2024-10-08T20:04:54.685647769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:04:54.686118 containerd[1468]: time="2024-10-08T20:04:54.685695048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:54.686118 containerd[1468]: time="2024-10-08T20:04:54.685843162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:04:54.743752 systemd[1]: Started cri-containerd-f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c.scope - libcontainer container f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c. Oct 8 20:04:54.858366 containerd[1468]: time="2024-10-08T20:04:54.857572122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555d85cd96-284s2,Uid:d94586f1-ae4a-44a1-b9c7-232aa0fefc3e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa\"" Oct 8 20:04:54.862491 containerd[1468]: time="2024-10-08T20:04:54.862199687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:04:54.885437 containerd[1468]: time="2024-10-08T20:04:54.885223824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-555d85cd96-j7nqh,Uid:44c12864-2e30-4929-b46a-e1595adedd4b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c\"" Oct 8 20:04:55.385984 systemd[1]: run-containerd-runc-k8s.io-f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c-runc.mVx4AK.mount: Deactivated successfully. Oct 8 20:04:55.428038 systemd-networkd[1372]: cali988f26e6bd3: Gained IPv6LL Oct 8 20:04:56.324304 systemd-networkd[1372]: cali495911a06c5: Gained IPv6LL Oct 8 20:04:56.360829 containerd[1468]: time="2024-10-08T20:04:56.360732199Z" level=info msg="StopPodSandbox for \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\"" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.594 [WARNING][4698] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"339880f2-88f3-4ae0-969e-5e762c1684c8", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8", Pod:"csi-node-driver-4tq78", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali67021dacb74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.594 [INFO][4698] k8s.go 608: Cleaning up netns ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.594 [INFO][4698] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" iface="eth0" netns="" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.594 [INFO][4698] k8s.go 615: Releasing IP address(es) ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.594 [INFO][4698] utils.go 188: Calico CNI releasing IP address ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.673 [INFO][4708] ipam_plugin.go 417: Releasing address using handleID ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.674 [INFO][4708] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.674 [INFO][4708] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.689 [WARNING][4708] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.689 [INFO][4708] ipam_plugin.go 445: Releasing address using workloadID ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.692 [INFO][4708] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:56.702598 containerd[1468]: 2024-10-08 20:04:56.695 [INFO][4698] k8s.go 621: Teardown processing complete. ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.702598 containerd[1468]: time="2024-10-08T20:04:56.702538510Z" level=info msg="TearDown network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\" successfully" Oct 8 20:04:56.702598 containerd[1468]: time="2024-10-08T20:04:56.702574187Z" level=info msg="StopPodSandbox for \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\" returns successfully" Oct 8 20:04:56.706163 containerd[1468]: time="2024-10-08T20:04:56.703978312Z" level=info msg="RemovePodSandbox for \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\"" Oct 8 20:04:56.706163 containerd[1468]: time="2024-10-08T20:04:56.704084677Z" level=info msg="Forcibly stopping sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\"" Oct 8 20:04:56.735703 systemd[1]: Started sshd@7-10.128.0.66:22-139.178.68.195:42946.service - OpenSSH per-connection server daemon (139.178.68.195:42946). Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.864 [WARNING][4729] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"339880f2-88f3-4ae0-969e-5e762c1684c8", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"b59e148c86f3e68692bad5ba30f85a5d446c506e45528e7e235218ce08350bb8", Pod:"csi-node-driver-4tq78", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.8.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali67021dacb74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.864 [INFO][4729] k8s.go 608: Cleaning up netns ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.864 [INFO][4729] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" iface="eth0" netns="" Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.864 [INFO][4729] k8s.go 615: Releasing IP address(es) ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.864 [INFO][4729] utils.go 188: Calico CNI releasing IP address ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.923 [INFO][4736] ipam_plugin.go 417: Releasing address using handleID ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.923 [INFO][4736] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.923 [INFO][4736] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.933 [WARNING][4736] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.934 [INFO][4736] ipam_plugin.go 445: Releasing address using workloadID ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" HandleID="k8s-pod-network.49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-csi--node--driver--4tq78-eth0" Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.937 [INFO][4736] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:56.942821 containerd[1468]: 2024-10-08 20:04:56.939 [INFO][4729] k8s.go 621: Teardown processing complete. ContainerID="49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05" Oct 8 20:04:56.942821 containerd[1468]: time="2024-10-08T20:04:56.942700878Z" level=info msg="TearDown network for sandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\" successfully" Oct 8 20:04:56.951278 containerd[1468]: time="2024-10-08T20:04:56.950741799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:04:56.951278 containerd[1468]: time="2024-10-08T20:04:56.950839073Z" level=info msg="RemovePodSandbox \"49ecaad7b5c79085e7ba2e132a549ff537dbe880f36fa063229645da5128ed05\" returns successfully" Oct 8 20:04:56.951836 containerd[1468]: time="2024-10-08T20:04:56.951786253Z" level=info msg="StopPodSandbox for \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\"" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.050 [WARNING][4754] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0", GenerateName:"calico-kube-controllers-6dd5967bb4-", Namespace:"calico-system", SelfLink:"", UID:"7d345d2a-35fa-4b82-8298-e61701951a29", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd5967bb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe", Pod:"calico-kube-controllers-6dd5967bb4-bpclm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd89da254e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.050 [INFO][4754] k8s.go 608: Cleaning up netns ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.050 [INFO][4754] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" iface="eth0" netns="" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.050 [INFO][4754] k8s.go 615: Releasing IP address(es) ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.051 [INFO][4754] utils.go 188: Calico CNI releasing IP address ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.113 [INFO][4760] ipam_plugin.go 417: Releasing address using handleID ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.113 [INFO][4760] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.113 [INFO][4760] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.125 [WARNING][4760] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.125 [INFO][4760] ipam_plugin.go 445: Releasing address using workloadID ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.127 [INFO][4760] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:57.135178 containerd[1468]: 2024-10-08 20:04:57.130 [INFO][4754] k8s.go 621: Teardown processing complete. ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.136202 containerd[1468]: time="2024-10-08T20:04:57.135235521Z" level=info msg="TearDown network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\" successfully" Oct 8 20:04:57.136202 containerd[1468]: time="2024-10-08T20:04:57.135270080Z" level=info msg="StopPodSandbox for \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\" returns successfully" Oct 8 20:04:57.140043 containerd[1468]: time="2024-10-08T20:04:57.138120618Z" level=info msg="RemovePodSandbox for \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\"" Oct 8 20:04:57.140043 containerd[1468]: time="2024-10-08T20:04:57.138170341Z" level=info msg="Forcibly stopping sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\"" Oct 8 20:04:57.176115 sshd[4728]: Accepted publickey for core from 139.178.68.195 port 42946 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:04:57.180619 sshd[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:04:57.197663 systemd-logind[1449]: New session 8 of user core. Oct 8 20:04:57.203253 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.295 [WARNING][4778] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0", GenerateName:"calico-kube-controllers-6dd5967bb4-", Namespace:"calico-system", SelfLink:"", UID:"7d345d2a-35fa-4b82-8298-e61701951a29", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6dd5967bb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"25aa7f5136b88c39758f5d1dc55dd95e3beaded9c22048c15e12f1e3cbf169fe", Pod:"calico-kube-controllers-6dd5967bb4-bpclm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.8.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califd89da254e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.297 [INFO][4778] k8s.go 608: Cleaning up netns ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.297 [INFO][4778] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" iface="eth0" netns="" Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.298 [INFO][4778] k8s.go 615: Releasing IP address(es) ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.299 [INFO][4778] utils.go 188: Calico CNI releasing IP address ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.376 [INFO][4785] ipam_plugin.go 417: Releasing address using handleID ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.377 [INFO][4785] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.377 [INFO][4785] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.397 [WARNING][4785] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.397 [INFO][4785] ipam_plugin.go 445: Releasing address using workloadID ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" HandleID="k8s-pod-network.01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-calico--kube--controllers--6dd5967bb4--bpclm-eth0" Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.400 [INFO][4785] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:57.409271 containerd[1468]: 2024-10-08 20:04:57.405 [INFO][4778] k8s.go 621: Teardown processing complete. ContainerID="01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66" Oct 8 20:04:57.409271 containerd[1468]: time="2024-10-08T20:04:57.408449632Z" level=info msg="TearDown network for sandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\" successfully" Oct 8 20:04:57.424093 containerd[1468]: time="2024-10-08T20:04:57.423555167Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:04:57.424093 containerd[1468]: time="2024-10-08T20:04:57.423710958Z" level=info msg="RemovePodSandbox \"01d867e172d3cfc9b586ea49b5c5d827e4622c44995a093e8585ebf27d6a9f66\" returns successfully" Oct 8 20:04:57.425062 containerd[1468]: time="2024-10-08T20:04:57.424468760Z" level=info msg="StopPodSandbox for \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\"" Oct 8 20:04:57.712955 sshd[4728]: pam_unix(sshd:session): session closed for user core Oct 8 20:04:57.725459 systemd[1]: sshd@7-10.128.0.66:22-139.178.68.195:42946.service: Deactivated successfully. Oct 8 20:04:57.734445 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 20:04:57.737770 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.588 [WARNING][4812] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc1181be-ac87-4d2f-a808-b62c5fc38f5a", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa", Pod:"coredns-7db6d8ff4d-g7x9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib585b99e978", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.589 [INFO][4812] k8s.go 608: Cleaning up netns ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.589 [INFO][4812] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" iface="eth0" netns="" Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.589 [INFO][4812] k8s.go 615: Releasing IP address(es) ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.589 [INFO][4812] utils.go 188: Calico CNI releasing IP address ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.688 [INFO][4819] ipam_plugin.go 417: Releasing address using handleID ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.688 [INFO][4819] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.688 [INFO][4819] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.712 [WARNING][4819] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.712 [INFO][4819] ipam_plugin.go 445: Releasing address using workloadID ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.723 [INFO][4819] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:57.740776 containerd[1468]: 2024-10-08 20:04:57.732 [INFO][4812] k8s.go 621: Teardown processing complete. ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:57.743487 containerd[1468]: time="2024-10-08T20:04:57.740834496Z" level=info msg="TearDown network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\" successfully" Oct 8 20:04:57.743487 containerd[1468]: time="2024-10-08T20:04:57.740868902Z" level=info msg="StopPodSandbox for \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\" returns successfully" Oct 8 20:04:57.741689 systemd-logind[1449]: Removed session 8. Oct 8 20:04:57.744954 containerd[1468]: time="2024-10-08T20:04:57.744552587Z" level=info msg="RemovePodSandbox for \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\"" Oct 8 20:04:57.744954 containerd[1468]: time="2024-10-08T20:04:57.744602844Z" level=info msg="Forcibly stopping sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\"" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.897 [WARNING][4839] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bc1181be-ac87-4d2f-a808-b62c5fc38f5a", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"464ed06561b0da5ae3e54b42c3a76a6f9ff0eb6994f95ca4c7bd725a91c1a6aa", Pod:"coredns-7db6d8ff4d-g7x9j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib585b99e978", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.897 [INFO][4839] k8s.go 608: Cleaning up netns ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.898 [INFO][4839] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" iface="eth0" netns="" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.898 [INFO][4839] k8s.go 615: Releasing IP address(es) ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.898 [INFO][4839] utils.go 188: Calico CNI releasing IP address ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.975 [INFO][4846] ipam_plugin.go 417: Releasing address using handleID ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.975 [INFO][4846] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.979 [INFO][4846] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.996 [WARNING][4846] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:57.997 [INFO][4846] ipam_plugin.go 445: Releasing address using workloadID ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" HandleID="k8s-pod-network.006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--g7x9j-eth0" Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:58.000 [INFO][4846] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:58.010041 containerd[1468]: 2024-10-08 20:04:58.002 [INFO][4839] k8s.go 621: Teardown processing complete. ContainerID="006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc" Oct 8 20:04:58.014849 containerd[1468]: time="2024-10-08T20:04:58.012711546Z" level=info msg="TearDown network for sandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\" successfully" Oct 8 20:04:58.022610 containerd[1468]: time="2024-10-08T20:04:58.022560090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:04:58.022974 containerd[1468]: time="2024-10-08T20:04:58.022943510Z" level=info msg="RemovePodSandbox \"006f08c4faa76eb16940fd11af4900bbe59bbbac8aa8dbfdeb2d20fdbe4b38cc\" returns successfully" Oct 8 20:04:58.025330 containerd[1468]: time="2024-10-08T20:04:58.025293775Z" level=info msg="StopPodSandbox for \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\"" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.148 [WARNING][4864] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd", Pod:"coredns-7db6d8ff4d-j86sj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf8e829326", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.151 [INFO][4864] k8s.go 608: Cleaning up netns ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.151 [INFO][4864] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" iface="eth0" netns="" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.151 [INFO][4864] k8s.go 615: Releasing IP address(es) ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.151 [INFO][4864] utils.go 188: Calico CNI releasing IP address ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.218 [INFO][4871] ipam_plugin.go 417: Releasing address using handleID ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.219 [INFO][4871] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.219 [INFO][4871] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.242 [WARNING][4871] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.242 [INFO][4871] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.249 [INFO][4871] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:58.261394 containerd[1468]: 2024-10-08 20:04:58.255 [INFO][4864] k8s.go 621: Teardown processing complete. ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.263253 containerd[1468]: time="2024-10-08T20:04:58.262881441Z" level=info msg="TearDown network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\" successfully" Oct 8 20:04:58.263253 containerd[1468]: time="2024-10-08T20:04:58.262933657Z" level=info msg="StopPodSandbox for \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\" returns successfully" Oct 8 20:04:58.266121 containerd[1468]: time="2024-10-08T20:04:58.265537079Z" level=info msg="RemovePodSandbox for \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\"" Oct 8 20:04:58.266121 containerd[1468]: time="2024-10-08T20:04:58.265614661Z" level=info msg="Forcibly stopping sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\"" Oct 8 20:04:58.371768 ntpd[1436]: Listen normally on 13 cali988f26e6bd3 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 20:04:58.373337 ntpd[1436]: 8 Oct 20:04:58 ntpd[1436]: Listen normally on 13 cali988f26e6bd3 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 20:04:58.373337 ntpd[1436]: 8 Oct 20:04:58 ntpd[1436]: Listen normally on 14 cali495911a06c5 [fe80::ecee:eeff:feee:eeee%12]:123 Oct 8 20:04:58.371877 ntpd[1436]: Listen normally on 14 cali495911a06c5 [fe80::ecee:eeff:feee:eeee%12]:123 Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.407 [WARNING][4892] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e9a17030-3a1d-4d27-94e8-64bccf5f8ba4", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 4, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-1-0-7c8d7ab9cda1ea1b69bb.c.flatcar-212911.internal", ContainerID:"fb058b514b539dca002af7fa564e3c56fdf1d98051135c065aab8105d30783cd", Pod:"coredns-7db6d8ff4d-j86sj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.8.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliddf8e829326", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.409 [INFO][4892] k8s.go 608: Cleaning up netns ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.409 [INFO][4892] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" iface="eth0" netns="" Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.409 [INFO][4892] k8s.go 615: Releasing IP address(es) ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.410 [INFO][4892] utils.go 188: Calico CNI releasing IP address ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.469 [INFO][4913] ipam_plugin.go 417: Releasing address using handleID ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.470 [INFO][4913] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.470 [INFO][4913] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.491 [WARNING][4913] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.491 [INFO][4913] ipam_plugin.go 445: Releasing address using workloadID ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" HandleID="k8s-pod-network.b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Workload="ci--4081--1--0--7c8d7ab9cda1ea1b69bb.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--j86sj-eth0" Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.494 [INFO][4913] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:04:58.503719 containerd[1468]: 2024-10-08 20:04:58.498 [INFO][4892] k8s.go 621: Teardown processing complete. ContainerID="b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814" Oct 8 20:04:58.505982 containerd[1468]: time="2024-10-08T20:04:58.503003749Z" level=info msg="TearDown network for sandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\" successfully" Oct 8 20:04:58.516696 containerd[1468]: time="2024-10-08T20:04:58.515508322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 20:04:58.516696 containerd[1468]: time="2024-10-08T20:04:58.515805404Z" level=info msg="RemovePodSandbox \"b38e24821b44881c182c00f6da8445a77b96a7ea6056e5aefe8f9254b2ff4814\" returns successfully" Oct 8 20:04:58.727553 containerd[1468]: time="2024-10-08T20:04:58.727473108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:58.730273 containerd[1468]: time="2024-10-08T20:04:58.729862297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 8 20:04:58.733038 containerd[1468]: time="2024-10-08T20:04:58.731723619Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:58.737461 containerd[1468]: time="2024-10-08T20:04:58.737403821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:58.739762 containerd[1468]: time="2024-10-08T20:04:58.739715449Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 3.877458072s" Oct 8 20:04:58.739975 containerd[1468]: time="2024-10-08T20:04:58.739947161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference 
\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:04:58.742850 containerd[1468]: time="2024-10-08T20:04:58.742790988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:04:58.744973 containerd[1468]: time="2024-10-08T20:04:58.744924195Z" level=info msg="CreateContainer within sandbox \"3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:04:58.768486 containerd[1468]: time="2024-10-08T20:04:58.768336502Z" level=info msg="CreateContainer within sandbox \"3752f695ae2da131277677c6b887ad8c8630ac883ad73dac28c5251b079b04fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8af985d55e5e11664d320a231fa27f46dfdec43644f8cec0498746d427f82a9e\"" Oct 8 20:04:58.772059 containerd[1468]: time="2024-10-08T20:04:58.769376388Z" level=info msg="StartContainer for \"8af985d55e5e11664d320a231fa27f46dfdec43644f8cec0498746d427f82a9e\"" Oct 8 20:04:58.780360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728693468.mount: Deactivated successfully. Oct 8 20:04:58.840315 systemd[1]: Started cri-containerd-8af985d55e5e11664d320a231fa27f46dfdec43644f8cec0498746d427f82a9e.scope - libcontainer container 8af985d55e5e11664d320a231fa27f46dfdec43644f8cec0498746d427f82a9e. Oct 8 20:04:58.919885 containerd[1468]: time="2024-10-08T20:04:58.919833208Z" level=info msg="StartContainer for \"8af985d55e5e11664d320a231fa27f46dfdec43644f8cec0498746d427f82a9e\" returns successfully" Oct 8 20:04:58.969577 containerd[1468]: time="2024-10-08T20:04:58.969467490Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:04:58.969577 containerd[1468]: time="2024-10-08T20:04:58.970464204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 20:04:58.975139 containerd[1468]: time="2024-10-08T20:04:58.975095023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 232.024253ms" Oct 8 20:04:58.975341 containerd[1468]: time="2024-10-08T20:04:58.975317329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 8 20:04:58.978838 containerd[1468]: time="2024-10-08T20:04:58.978790811Z" level=info msg="CreateContainer within sandbox \"f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:04:59.005170 containerd[1468]: time="2024-10-08T20:04:59.005080528Z" level=info msg="CreateContainer within sandbox \"f868f7c2e845c86c30fbb28c9d5c9411b6b0e8999a0abc4542303acbe199ae5c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c8660457933824cba1b80c74aad0f5bbcaccdc1ce48d3609e842c431abb7c2a8\"" Oct 8 20:04:59.006356 containerd[1468]: time="2024-10-08T20:04:59.006298503Z" level=info msg="StartContainer for \"c8660457933824cba1b80c74aad0f5bbcaccdc1ce48d3609e842c431abb7c2a8\"" Oct 8 20:04:59.060480 systemd[1]: Started 
cri-containerd-c8660457933824cba1b80c74aad0f5bbcaccdc1ce48d3609e842c431abb7c2a8.scope - libcontainer container c8660457933824cba1b80c74aad0f5bbcaccdc1ce48d3609e842c431abb7c2a8. Oct 8 20:04:59.143865 containerd[1468]: time="2024-10-08T20:04:59.142398888Z" level=info msg="StartContainer for \"c8660457933824cba1b80c74aad0f5bbcaccdc1ce48d3609e842c431abb7c2a8\" returns successfully" Oct 8 20:04:59.958126 kubelet[2620]: I1008 20:04:59.957973 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-555d85cd96-284s2" podStartSLOduration=3.078045495 podStartE2EDuration="6.957920551s" podCreationTimestamp="2024-10-08 20:04:53 +0000 UTC" firstStartedPulling="2024-10-08 20:04:54.861600047 +0000 UTC m=+58.691648628" lastFinishedPulling="2024-10-08 20:04:58.741475099 +0000 UTC m=+62.571523684" observedRunningTime="2024-10-08 20:04:59.955208735 +0000 UTC m=+63.785257325" watchObservedRunningTime="2024-10-08 20:04:59.957920551 +0000 UTC m=+63.787969150" Oct 8 20:04:59.959476 kubelet[2620]: I1008 20:04:59.959150 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-555d85cd96-j7nqh" podStartSLOduration=2.871822002 podStartE2EDuration="6.959120077s" podCreationTimestamp="2024-10-08 20:04:53 +0000 UTC" firstStartedPulling="2024-10-08 20:04:54.889059305 +0000 UTC m=+58.719107877" lastFinishedPulling="2024-10-08 20:04:58.97635737 +0000 UTC m=+62.806405952" observedRunningTime="2024-10-08 20:04:59.928504016 +0000 UTC m=+63.758552608" watchObservedRunningTime="2024-10-08 20:04:59.959120077 +0000 UTC m=+63.789168668" Oct 8 20:05:00.917795 kubelet[2620]: I1008 20:05:00.917746 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:05:02.783523 systemd[1]: Started sshd@8-10.128.0.66:22-139.178.68.195:33600.service - OpenSSH per-connection server daemon (139.178.68.195:33600). Oct 8 20:05:03.175426 sshd[5022]: Accepted publickey for core from 139.178.68.195 port 33600 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:03.177778 sshd[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:03.187792 systemd-logind[1449]: New session 9 of user core. Oct 8 20:05:03.192204 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 20:05:03.543086 sshd[5022]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:03.552994 systemd[1]: sshd@8-10.128.0.66:22-139.178.68.195:33600.service: Deactivated successfully. Oct 8 20:05:03.557580 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 20:05:03.560847 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Oct 8 20:05:03.563113 systemd-logind[1449]: Removed session 9. Oct 8 20:05:08.614932 systemd[1]: Started sshd@9-10.128.0.66:22-139.178.68.195:33602.service - OpenSSH per-connection server daemon (139.178.68.195:33602). Oct 8 20:05:09.002510 sshd[5040]: Accepted publickey for core from 139.178.68.195 port 33602 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:09.004852 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:09.012069 systemd-logind[1449]: New session 10 of user core. Oct 8 20:05:09.020359 systemd[1]: Started session-10.scope - Session 10 of User core. 
Oct 8 20:05:09.354471 sshd[5040]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:09.360195 systemd[1]: sshd@9-10.128.0.66:22-139.178.68.195:33602.service: Deactivated successfully. Oct 8 20:05:09.362915 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 20:05:09.363949 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Oct 8 20:05:09.365670 systemd-logind[1449]: Removed session 10. Oct 8 20:05:09.425480 systemd[1]: Started sshd@10-10.128.0.66:22-139.178.68.195:33616.service - OpenSSH per-connection server daemon (139.178.68.195:33616). Oct 8 20:05:09.803624 sshd[5054]: Accepted publickey for core from 139.178.68.195 port 33616 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:09.806318 sshd[5054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:09.813917 systemd-logind[1449]: New session 11 of user core. Oct 8 20:05:09.820309 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 20:05:10.212419 sshd[5054]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:10.218369 systemd[1]: sshd@10-10.128.0.66:22-139.178.68.195:33616.service: Deactivated successfully. Oct 8 20:05:10.222555 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 20:05:10.225036 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Oct 8 20:05:10.227120 systemd-logind[1449]: Removed session 11. Oct 8 20:05:10.286545 systemd[1]: Started sshd@11-10.128.0.66:22-139.178.68.195:33622.service - OpenSSH per-connection server daemon (139.178.68.195:33622). Oct 8 20:05:10.674625 sshd[5065]: Accepted publickey for core from 139.178.68.195 port 33622 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:10.676847 sshd[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:10.684876 systemd-logind[1449]: New session 12 of user core. Oct 8 20:05:10.689458 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 20:05:11.031485 sshd[5065]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:11.041637 systemd[1]: sshd@11-10.128.0.66:22-139.178.68.195:33622.service: Deactivated successfully. Oct 8 20:05:11.045282 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 20:05:11.046953 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Oct 8 20:05:11.048657 systemd-logind[1449]: Removed session 12. Oct 8 20:05:16.103525 systemd[1]: Started sshd@12-10.128.0.66:22-139.178.68.195:49530.service - OpenSSH per-connection server daemon (139.178.68.195:49530). Oct 8 20:05:16.492212 sshd[5091]: Accepted publickey for core from 139.178.68.195 port 49530 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:16.494302 sshd[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:16.501891 systemd-logind[1449]: New session 13 of user core. Oct 8 20:05:16.508270 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 20:05:16.855675 sshd[5091]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:16.861406 systemd[1]: sshd@12-10.128.0.66:22-139.178.68.195:49530.service: Deactivated successfully. Oct 8 20:05:16.864656 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 20:05:16.865950 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Oct 8 20:05:16.868951 systemd-logind[1449]: Removed session 13. 
Oct 8 20:05:17.933358 kubelet[2620]: I1008 20:05:17.932785 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:05:21.931589 systemd[1]: Started sshd@13-10.128.0.66:22-139.178.68.195:57614.service - OpenSSH per-connection server daemon (139.178.68.195:57614). Oct 8 20:05:22.327924 sshd[5160]: Accepted publickey for core from 139.178.68.195 port 57614 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:22.331773 sshd[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:22.341956 systemd-logind[1449]: New session 14 of user core. Oct 8 20:05:22.352305 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 20:05:22.710097 sshd[5160]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:22.714841 systemd[1]: sshd@13-10.128.0.66:22-139.178.68.195:57614.service: Deactivated successfully. Oct 8 20:05:22.718256 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 20:05:22.720666 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Oct 8 20:05:22.722341 systemd-logind[1449]: Removed session 14. Oct 8 20:05:27.787509 systemd[1]: Started sshd@14-10.128.0.66:22-139.178.68.195:57626.service - OpenSSH per-connection server daemon (139.178.68.195:57626). Oct 8 20:05:28.174977 sshd[5173]: Accepted publickey for core from 139.178.68.195 port 57626 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:28.177676 sshd[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:28.185375 systemd-logind[1449]: New session 15 of user core. Oct 8 20:05:28.192318 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 20:05:28.621929 sshd[5173]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:28.635512 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Oct 8 20:05:28.636613 systemd[1]: sshd@14-10.128.0.66:22-139.178.68.195:57626.service: Deactivated successfully. Oct 8 20:05:28.643140 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 20:05:28.648658 systemd-logind[1449]: Removed session 15. Oct 8 20:05:33.689460 systemd[1]: Started sshd@15-10.128.0.66:22-139.178.68.195:47164.service - OpenSSH per-connection server daemon (139.178.68.195:47164). Oct 8 20:05:34.068855 sshd[5209]: Accepted publickey for core from 139.178.68.195 port 47164 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:34.070838 sshd[5209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:34.076697 systemd-logind[1449]: New session 16 of user core. Oct 8 20:05:34.087338 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 20:05:34.421645 sshd[5209]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:34.428987 systemd[1]: sshd@15-10.128.0.66:22-139.178.68.195:47164.service: Deactivated successfully. Oct 8 20:05:34.432576 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 20:05:34.434279 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Oct 8 20:05:34.435934 systemd-logind[1449]: Removed session 16. Oct 8 20:05:34.496786 systemd[1]: Started sshd@16-10.128.0.66:22-139.178.68.195:47166.service - OpenSSH per-connection server daemon (139.178.68.195:47166). 
Oct 8 20:05:34.875888 sshd[5222]: Accepted publickey for core from 139.178.68.195 port 47166 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:34.878364 sshd[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:34.884655 systemd-logind[1449]: New session 17 of user core. Oct 8 20:05:34.893346 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 20:05:35.314388 sshd[5222]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:35.320727 systemd[1]: sshd@16-10.128.0.66:22-139.178.68.195:47166.service: Deactivated successfully. Oct 8 20:05:35.324568 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 20:05:35.328522 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Oct 8 20:05:35.331198 systemd-logind[1449]: Removed session 17. Oct 8 20:05:35.385501 systemd[1]: Started sshd@17-10.128.0.66:22-139.178.68.195:47180.service - OpenSSH per-connection server daemon (139.178.68.195:47180). Oct 8 20:05:35.776841 sshd[5235]: Accepted publickey for core from 139.178.68.195 port 47180 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:35.779357 sshd[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:35.788921 systemd-logind[1449]: New session 18 of user core. Oct 8 20:05:35.793302 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 20:05:37.999736 sshd[5235]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:38.007198 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Oct 8 20:05:38.007871 systemd[1]: sshd@17-10.128.0.66:22-139.178.68.195:47180.service: Deactivated successfully. Oct 8 20:05:38.013619 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 20:05:38.018422 systemd-logind[1449]: Removed session 18. Oct 8 20:05:38.070497 systemd[1]: Started sshd@18-10.128.0.66:22-139.178.68.195:47194.service - OpenSSH per-connection server daemon (139.178.68.195:47194). Oct 8 20:05:38.466476 sshd[5253]: Accepted publickey for core from 139.178.68.195 port 47194 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:38.468964 sshd[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:38.482099 systemd-logind[1449]: New session 19 of user core. Oct 8 20:05:38.492676 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:05:38.991756 sshd[5253]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:38.998831 systemd[1]: sshd@18-10.128.0.66:22-139.178.68.195:47194.service: Deactivated successfully. Oct 8 20:05:39.001885 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:05:39.003514 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:05:39.005146 systemd-logind[1449]: Removed session 19. Oct 8 20:05:39.058055 systemd[1]: Started sshd@19-10.128.0.66:22-139.178.68.195:47202.service - OpenSSH per-connection server daemon (139.178.68.195:47202). Oct 8 20:05:39.432233 sshd[5264]: Accepted publickey for core from 139.178.68.195 port 47202 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:39.434364 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:39.440993 systemd-logind[1449]: New session 20 of user core. Oct 8 20:05:39.447262 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 8 20:05:39.772227 sshd[5264]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:39.777661 systemd[1]: sshd@19-10.128.0.66:22-139.178.68.195:47202.service: Deactivated successfully. Oct 8 20:05:39.781089 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 20:05:39.783340 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Oct 8 20:05:39.785321 systemd-logind[1449]: Removed session 20. Oct 8 20:05:44.855519 systemd[1]: Started sshd@20-10.128.0.66:22-139.178.68.195:55772.service - OpenSSH per-connection server daemon (139.178.68.195:55772). Oct 8 20:05:45.243756 sshd[5287]: Accepted publickey for core from 139.178.68.195 port 55772 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:45.246101 sshd[5287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:45.252770 systemd-logind[1449]: New session 21 of user core. Oct 8 20:05:45.258265 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 20:05:45.599584 sshd[5287]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:45.604704 systemd[1]: sshd@20-10.128.0.66:22-139.178.68.195:55772.service: Deactivated successfully. Oct 8 20:05:45.608647 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 20:05:45.611084 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Oct 8 20:05:45.613145 systemd-logind[1449]: Removed session 21. Oct 8 20:05:50.681258 systemd[1]: Started sshd@21-10.128.0.66:22-139.178.68.195:33286.service - OpenSSH per-connection server daemon (139.178.68.195:33286). Oct 8 20:05:51.074568 sshd[5322]: Accepted publickey for core from 139.178.68.195 port 33286 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:51.076940 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:51.084768 systemd-logind[1449]: New session 22 of user core. Oct 8 20:05:51.090259 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 20:05:51.470588 sshd[5322]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:51.477642 systemd[1]: sshd@21-10.128.0.66:22-139.178.68.195:33286.service: Deactivated successfully. Oct 8 20:05:51.481083 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 20:05:51.482229 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Oct 8 20:05:51.484203 systemd-logind[1449]: Removed session 22. Oct 8 20:05:56.540372 systemd[1]: Started sshd@22-10.128.0.66:22-139.178.68.195:33292.service - OpenSSH per-connection server daemon (139.178.68.195:33292). Oct 8 20:05:56.917766 sshd[5343]: Accepted publickey for core from 139.178.68.195 port 33292 ssh2: RSA SHA256:4XCeHSiyjLVBMobsx2LbnZLh2N154hXZugeS4dPAXUI Oct 8 20:05:56.919841 sshd[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 20:05:56.926896 systemd-logind[1449]: New session 23 of user core. Oct 8 20:05:56.931308 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 20:05:57.273584 sshd[5343]: pam_unix(sshd:session): session closed for user core Oct 8 20:05:57.280656 systemd[1]: sshd@22-10.128.0.66:22-139.178.68.195:33292.service: Deactivated successfully. Oct 8 20:05:57.283463 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 20:05:57.284519 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Oct 8 20:05:57.286836 systemd-logind[1449]: Removed session 23.