Nov 12 20:55:11.075593 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:55:11.075637 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:55:11.075655 kernel: BIOS-provided physical RAM map:
Nov 12 20:55:11.075670 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Nov 12 20:55:11.075684 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Nov 12 20:55:11.075706 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Nov 12 20:55:11.075723 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Nov 12 20:55:11.075743 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Nov 12 20:55:11.075757 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Nov 12 20:55:11.075771 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Nov 12 20:55:11.075786 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Nov 12 20:55:11.075801 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Nov 12 20:55:11.075816 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Nov 12 20:55:11.075831 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Nov 12 20:55:11.075854 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Nov 12 20:55:11.075871 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Nov 12 20:55:11.075887 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Nov 12 20:55:11.075904 kernel: NX (Execute Disable) protection: active
Nov 12 20:55:11.075921 kernel: APIC: Static calls initialized
Nov 12 20:55:11.075937 kernel: efi: EFI v2.7 by EDK II
Nov 12 20:55:11.075954 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Nov 12 20:55:11.075969 kernel: SMBIOS 2.4 present.
Nov 12 20:55:11.075982 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Nov 12 20:55:11.075999 kernel: Hypervisor detected: KVM
Nov 12 20:55:11.076028 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 12 20:55:11.076045 kernel: kvm-clock: using sched offset of 11659743555 cycles
Nov 12 20:55:11.076061 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 12 20:55:11.076084 kernel: tsc: Detected 2299.998 MHz processor
Nov 12 20:55:11.076100 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:55:11.076116 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:55:11.076134 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Nov 12 20:55:11.076151 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Nov 12 20:55:11.076167 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:55:11.076187 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Nov 12 20:55:11.076204 kernel: Using GB pages for direct mapping
Nov 12 20:55:11.076220 kernel: Secure boot disabled
Nov 12 20:55:11.076237 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:55:11.076253 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Nov 12 20:55:11.076269 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Nov 12 20:55:11.076286 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Nov 12 20:55:11.076309 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Nov 12 20:55:11.076329 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Nov 12 20:55:11.076347 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Nov 12 20:55:11.076393 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Nov 12 20:55:11.076411 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Nov 12 20:55:11.076429 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Nov 12 20:55:11.076447 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Nov 12 20:55:11.076469 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Nov 12 20:55:11.076486 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Nov 12 20:55:11.076503 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Nov 12 20:55:11.076519 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Nov 12 20:55:11.076537 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Nov 12 20:55:11.076554 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Nov 12 20:55:11.076572 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Nov 12 20:55:11.076589 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Nov 12 20:55:11.076607 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Nov 12 20:55:11.076629 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Nov 12 20:55:11.076646 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:55:11.076663 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:55:11.076681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 12 20:55:11.076699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Nov 12 20:55:11.076716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Nov 12 20:55:11.076734 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Nov 12 20:55:11.076753 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Nov 12 20:55:11.076769 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Nov 12 20:55:11.076792 kernel: Zone ranges:
Nov 12 20:55:11.076812 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:55:11.076829 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 12 20:55:11.076847 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Nov 12 20:55:11.076866 kernel: Movable zone start for each node
Nov 12 20:55:11.076884 kernel: Early memory node ranges
Nov 12 20:55:11.076903 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Nov 12 20:55:11.076921 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Nov 12 20:55:11.076940 kernel:   node   0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Nov 12 20:55:11.076964 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Nov 12 20:55:11.076982 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Nov 12 20:55:11.077000 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Nov 12 20:55:11.077026 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:55:11.077045 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Nov 12 20:55:11.077063 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Nov 12 20:55:11.077081 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 12 20:55:11.077099 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Nov 12 20:55:11.077118 kernel: ACPI: PM-Timer IO Port: 0xb008
Nov 12 20:55:11.077141 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 12 20:55:11.077160 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:55:11.077179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 12 20:55:11.077197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:55:11.077214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 12 20:55:11.077232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 12 20:55:11.077249 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:55:11.077265 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:55:11.077283 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Nov 12 20:55:11.077304 kernel: Booting paravirtualized kernel on KVM
Nov 12 20:55:11.077321 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:55:11.077338 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:55:11.077355 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:55:11.077389 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:55:11.077406 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:55:11.077424 kernel: kvm-guest: PV spinlocks enabled
Nov 12 20:55:11.077442 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:55:11.077462 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:55:11.077486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:55:11.077504 kernel: random: crng init done
Nov 12 20:55:11.077522 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 12 20:55:11.077540 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:55:11.077558 kernel: Fallback order for Node 0: 0
Nov 12 20:55:11.077576 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Nov 12 20:55:11.077594 kernel: Policy zone: Normal
Nov 12 20:55:11.077612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:55:11.077633 kernel: software IO TLB: area num 2.
Nov 12 20:55:11.077652 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 346940K reserved, 0K cma-reserved)
Nov 12 20:55:11.077670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:55:11.077688 kernel: Kernel/User page tables isolation: enabled
Nov 12 20:55:11.077706 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:55:11.077723 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:55:11.077740 kernel: Dynamic Preempt: voluntary
Nov 12 20:55:11.077757 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:55:11.077777 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:55:11.077812 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:55:11.077832 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:55:11.077851 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:55:11.077873 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:55:11.077893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:55:11.077912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:55:11.077931 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 12 20:55:11.077950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:55:11.077970 kernel: Console: colour dummy device 80x25
Nov 12 20:55:11.077993 kernel: printk: console [ttyS0] enabled
Nov 12 20:55:11.078020 kernel: ACPI: Core revision 20230628
Nov 12 20:55:11.078038 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:55:11.078057 kernel: x2apic enabled
Nov 12 20:55:11.078077 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 12 20:55:11.078111 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Nov 12 20:55:11.078131 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 12 20:55:11.078151 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Nov 12 20:55:11.078174 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Nov 12 20:55:11.078194 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Nov 12 20:55:11.078214 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:55:11.078233 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Nov 12 20:55:11.078260 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Nov 12 20:55:11.078279 kernel: Spectre V2 : Mitigation: IBRS
Nov 12 20:55:11.078298 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:55:11.078318 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:55:11.078337 kernel: RETBleed: Mitigation: IBRS
Nov 12 20:55:11.078373 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 12 20:55:11.078404 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Nov 12 20:55:11.078424 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 12 20:55:11.078443 kernel: MDS: Mitigation: Clear CPU buffers
Nov 12 20:55:11.078462 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:55:11.078482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:55:11.078501 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:55:11.078521 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:55:11.078540 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:55:11.078563 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 12 20:55:11.078582 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:55:11.078601 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:55:11.078621 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:55:11.078640 kernel: landlock: Up and running.
Nov 12 20:55:11.078659 kernel: SELinux: Initializing.
Nov 12 20:55:11.078678 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:55:11.078698 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:55:11.078717 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Nov 12 20:55:11.078740 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:55:11.078759 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:55:11.078778 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:55:11.078798 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Nov 12 20:55:11.078817 kernel: signal: max sigframe size: 1776
Nov 12 20:55:11.078837 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:55:11.078857 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:55:11.078876 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:55:11.078895 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:55:11.078918 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:55:11.078938 kernel: .... node #0, CPUs: #1
Nov 12 20:55:11.078957 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Nov 12 20:55:11.078978 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 12 20:55:11.078997 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:55:11.079024 kernel: smpboot: Max logical packages: 1
Nov 12 20:55:11.079043 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Nov 12 20:55:11.079062 kernel: devtmpfs: initialized
Nov 12 20:55:11.079085 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:55:11.079104 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Nov 12 20:55:11.079123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:55:11.079143 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:55:11.079162 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:55:11.079181 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:55:11.079201 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:55:11.079221 kernel: audit: type=2000 audit(1731444909.500:1): state=initialized audit_enabled=0 res=1
Nov 12 20:55:11.079240 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:55:11.079262 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:55:11.079281 kernel: cpuidle: using governor menu
Nov 12 20:55:11.079300 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:55:11.079319 kernel: dca service started, version 1.12.1
Nov 12 20:55:11.079339 kernel: PCI: Using configuration type 1 for base access
Nov 12 20:55:11.079370 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:55:11.079387 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:55:11.079403 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:55:11.079419 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:55:11.079443 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:55:11.079460 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:55:11.079477 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:55:11.079495 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:55:11.079513 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:55:11.079531 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Nov 12 20:55:11.079550 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:55:11.079570 kernel: ACPI: Interpreter enabled
Nov 12 20:55:11.079590 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 12 20:55:11.079614 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:55:11.079633 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:55:11.079652 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 12 20:55:11.079672 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Nov 12 20:55:11.079689 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 12 20:55:11.079953 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 20:55:11.080171 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Nov 12 20:55:11.080377 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Nov 12 20:55:11.080409 kernel: PCI host bridge to bus 0000:00
Nov 12 20:55:11.080585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 12 20:55:11.080766 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:55:11.080925 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:55:11.081090 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 12 20:55:11.081247 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:55:11.081480 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:55:11.081684 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 12 20:55:11.081877 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 12 20:55:11.082072 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 12 20:55:11.082275 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 12 20:55:11.082493 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 12 20:55:11.082688 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 12 20:55:11.082885 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:55:11.083091 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 12 20:55:11.083278 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 12 20:55:11.083529 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:55:11.083727 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 12 20:55:11.083919 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 12 20:55:11.083952 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:55:11.083972 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:55:11.083992 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:55:11.084020 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:55:11.084038 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:55:11.084057 kernel: iommu: Default domain type: Translated
Nov 12 20:55:11.084074 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:55:11.084091 kernel: efivars: Registered efivars operations
Nov 12 20:55:11.084108 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:55:11.084131 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:55:11.084147 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 12 20:55:11.084165 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 12 20:55:11.084183 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 12 20:55:11.084201 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 12 20:55:11.084219 kernel: vgaarb: loaded
Nov 12 20:55:11.084237 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:55:11.084256 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:55:11.084274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:55:11.084295 kernel: pnp: PnP ACPI init
Nov 12 20:55:11.084321 kernel: pnp: PnP ACPI: found 7 devices
Nov 12 20:55:11.084339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:55:11.084373 kernel: NET: Registered PF_INET protocol family
Nov 12 20:55:11.084392 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:55:11.084410 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 12 20:55:11.084429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:55:11.084447 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:55:11.084466 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 12 20:55:11.084490 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 12 20:55:11.084507 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:55:11.084535 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:55:11.084555 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:55:11.084574 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:55:11.084766 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:55:11.084936 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:55:11.085111 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:55:11.085279 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 12 20:55:11.085516 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:55:11.085544 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:55:11.085563 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 12 20:55:11.085584 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 12 20:55:11.085604 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:55:11.085623 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 12 20:55:11.085643 kernel: clocksource: Switched to clocksource tsc
Nov 12 20:55:11.085668 kernel: Initialise system trusted keyrings
Nov 12 20:55:11.085687 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 12 20:55:11.085706 kernel: Key type asymmetric registered
Nov 12 20:55:11.085726 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:55:11.085745 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:55:11.085763 kernel: io scheduler mq-deadline registered
Nov 12 20:55:11.085783 kernel: io scheduler kyber registered
Nov 12 20:55:11.085802 kernel: io scheduler bfq registered
Nov 12 20:55:11.085822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:55:11.085846 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 12 20:55:11.086050 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 12 20:55:11.086076 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 12 20:55:11.086280 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 12 20:55:11.086306 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 12 20:55:11.086544 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 12 20:55:11.086571 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:55:11.086590 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086609 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086634 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086653 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086845 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 12 20:55:11.086870 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:55:11.086886 kernel: i8042: Warning: Keylock active
Nov 12 20:55:11.086901 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:55:11.086917 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:55:11.087121 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 12 20:55:11.087307 kernel: rtc_cmos 00:00: registered as rtc0
Nov 12 20:55:11.087495 kernel: rtc_cmos 00:00: setting system clock to 2024-11-12T20:55:10 UTC (1731444910)
Nov 12 20:55:11.087660 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 12 20:55:11.087683 kernel: intel_pstate: CPU model not supported
Nov 12 20:55:11.087702 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:55:11.087720 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:55:11.087738 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:55:11.087756 kernel: Segment Routing with IPv6
Nov 12 20:55:11.087780 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:55:11.087798 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:55:11.087817 kernel: Key type dns_resolver registered
Nov 12 20:55:11.087835 kernel: IPI shorthand broadcast: enabled
Nov 12 20:55:11.087853 kernel: sched_clock: Marking stable (846004104, 133897450)->(998289375, -18387821)
Nov 12 20:55:11.087871 kernel: registered taskstats version 1
Nov 12 20:55:11.087889 kernel: Loading compiled-in X.509 certificates
Nov 12 20:55:11.087907 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:55:11.087926 kernel: Key type .fscrypt registered
Nov 12 20:55:11.087947 kernel: Key type fscrypt-provisioning registered
Nov 12 20:55:11.087966 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:55:11.087984 kernel: ima: No architecture policies found
Nov 12 20:55:11.088002 kernel: clk: Disabling unused clocks
Nov 12 20:55:11.088028 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:55:11.088047 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:55:11.088065 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:55:11.088083 kernel: Run /init as init process
Nov 12 20:55:11.088105 kernel:   with arguments:
Nov 12 20:55:11.088123 kernel:     /init
Nov 12 20:55:11.088140 kernel:   with environment:
Nov 12 20:55:11.088157 kernel:     HOME=/
Nov 12 20:55:11.088173 kernel:     TERM=linux
Nov 12 20:55:11.088190 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:55:11.088209 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:55:11.088231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:55:11.088256 systemd[1]: Detected virtualization google.
Nov 12 20:55:11.088276 systemd[1]: Detected architecture x86-64.
Nov 12 20:55:11.088295 systemd[1]: Running in initrd.
Nov 12 20:55:11.088313 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:55:11.088331 systemd[1]: Hostname set to .
Nov 12 20:55:11.088351 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:55:11.089187 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:55:11.089210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:55:11.089238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:55:11.089259 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:55:11.089279 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:55:11.089299 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:55:11.089318 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:55:11.089341 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:55:11.089384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:55:11.089410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:55:11.089431 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:55:11.089472 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:55:11.089496 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:55:11.089516 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:55:11.089537 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:55:11.089560 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:55:11.089580 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:55:11.089601 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:55:11.089621 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:55:11.089644 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:55:11.089665 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:55:11.089684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:55:11.089704 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:55:11.089725 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:55:11.089750 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:55:11.089770 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:55:11.089789 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:55:11.089810 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:55:11.089830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:55:11.089849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:11.089905 systemd-journald[183]: Collecting audit messages is disabled.
Nov 12 20:55:11.089954 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:55:11.089975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:55:11.089994 systemd-journald[183]: Journal started
Nov 12 20:55:11.090052 systemd-journald[183]: Runtime Journal (/run/log/journal/eef95b32d1604831afb41380b96a1f8b) is 8.0M, max 148.7M, 140.7M free.
Nov 12 20:55:11.096179 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:55:11.100458 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:55:11.109330 systemd-modules-load[184]: Inserted module 'overlay'
Nov 12 20:55:11.111608 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:55:11.115512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:55:11.132097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:11.141729 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:55:11.159419 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:55:11.160594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:55:11.165597 kernel: Bridge firewalling registered
Nov 12 20:55:11.162027 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 12 20:55:11.169552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:55:11.170845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:55:11.171905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:55:11.186554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:55:11.189934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:11.199626 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:55:11.200560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:55:11.210868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:55:11.223574 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:55:11.245523 dracut-cmdline[214]: dracut-dracut-053
Nov 12 20:55:11.249626 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:55:11.278773 systemd-resolved[218]: Positive Trust Anchors:
Nov 12 20:55:11.279131 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:55:11.279202 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:55:11.284247 systemd-resolved[218]: Defaulting to hostname 'linux'.
Nov 12 20:55:11.285943 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:55:11.298567 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:55:11.354402 kernel: SCSI subsystem initialized
Nov 12 20:55:11.364396 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:55:11.376397 kernel: iscsi: registered transport (tcp)
Nov 12 20:55:11.399503 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:55:11.399583 kernel: QLogic iSCSI HBA Driver
Nov 12 20:55:11.451310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:55:11.455623 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:55:11.495983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:55:11.496053 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:55:11.496082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:55:11.541405 kernel: raid6: avx2x4 gen() 18131 MB/s
Nov 12 20:55:11.558403 kernel: raid6: avx2x2 gen() 18137 MB/s
Nov 12 20:55:11.575769 kernel: raid6: avx2x1 gen() 13924 MB/s
Nov 12 20:55:11.575809 kernel: raid6: using algorithm avx2x2 gen() 18137 MB/s
Nov 12 20:55:11.593787 kernel: raid6: .... xor() 17633 MB/s, rmw enabled
Nov 12 20:55:11.593825 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:55:11.616391 kernel: xor: automatically using best checksumming function avx
Nov 12 20:55:11.790405 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:55:11.803259 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:55:11.810575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:55:11.843499 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Nov 12 20:55:11.850224 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:55:11.860614 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:55:11.887133 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Nov 12 20:55:11.922484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:55:11.929634 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:55:12.008087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:55:12.019617 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:55:12.058976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:55:12.070013 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:55:12.078504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:55:12.082655 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:55:12.098992 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:55:12.107929 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:55:12.123380 kernel: scsi host0: Virtio SCSI HBA
Nov 12 20:55:12.128573 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 12 20:55:12.150146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:55:12.205325 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:55:12.205415 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:55:12.230048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:55:12.230344 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:12.237707 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:55:12.239188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:55:12.249014 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Nov 12 20:55:12.265732 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 12 20:55:12.265993 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 12 20:55:12.266220 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 12 20:55:12.267517 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 12 20:55:12.267750 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:55:12.267777 kernel: GPT:17805311 != 25165823
Nov 12 20:55:12.267800 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:55:12.267824 kernel: GPT:17805311 != 25165823
Nov 12 20:55:12.267846 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:55:12.267870 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:12.267909 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 12 20:55:12.239822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:12.255515 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:12.269505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:12.307827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:12.313960 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:55:12.330413 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (470)
Nov 12 20:55:12.337387 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (464)
Nov 12 20:55:12.353913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 12 20:55:12.371917 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:12.379249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 12 20:55:12.387124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 12 20:55:12.393952 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Nov 12 20:55:12.398461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Nov 12 20:55:12.410578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:55:12.429945 disk-uuid[550]: Primary Header is updated.
Nov 12 20:55:12.429945 disk-uuid[550]: Secondary Entries is updated.
Nov 12 20:55:12.429945 disk-uuid[550]: Secondary Header is updated.
Nov 12 20:55:12.450392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:12.474382 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:12.482396 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:13.483972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:13.484051 disk-uuid[551]: The operation has completed successfully.
Nov 12 20:55:13.552013 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:55:13.552167 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:55:13.581566 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:55:13.611434 sh[568]: Success
Nov 12 20:55:13.633381 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:55:13.711924 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:55:13.719300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:55:13.746855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:55:13.787729 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:55:13.787793 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:13.787830 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:55:13.797167 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:55:13.803995 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:55:13.836435 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 20:55:13.842560 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:55:13.843543 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:55:13.849596 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:55:13.914575 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:13.914625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:13.914648 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:55:13.869705 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:55:13.963113 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:55:13.963159 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:55:13.963186 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:13.944971 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:55:13.966763 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:55:13.998623 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:55:14.073145 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:55:14.083646 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:55:14.183122 systemd-networkd[751]: lo: Link UP
Nov 12 20:55:14.183137 systemd-networkd[751]: lo: Gained carrier
Nov 12 20:55:14.188154 systemd-networkd[751]: Enumeration completed
Nov 12 20:55:14.188313 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:55:14.212800 ignition[681]: Ignition 2.19.0
Nov 12 20:55:14.188866 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:55:14.212812 ignition[681]: Stage: fetch-offline
Nov 12 20:55:14.188875 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:55:14.212859 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.191189 systemd-networkd[751]: eth0: Link UP
Nov 12 20:55:14.212872 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.191198 systemd-networkd[751]: eth0: Gained carrier
Nov 12 20:55:14.213005 ignition[681]: parsed url from cmdline: ""
Nov 12 20:55:14.191213 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:55:14.213015 ignition[681]: no config URL provided
Nov 12 20:55:14.202468 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.109/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 12 20:55:14.213025 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.207666 systemd[1]: Reached target network.target - Network.
Nov 12 20:55:14.213037 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.216874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:55:14.213045 ignition[681]: failed to fetch config: resource requires networking
Nov 12 20:55:14.230632 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:55:14.213486 ignition[681]: Ignition finished successfully
Nov 12 20:55:14.289878 unknown[759]: fetched base config from "system"
Nov 12 20:55:14.279838 ignition[759]: Ignition 2.19.0
Nov 12 20:55:14.289890 unknown[759]: fetched base config from "system"
Nov 12 20:55:14.279857 ignition[759]: Stage: fetch
Nov 12 20:55:14.289899 unknown[759]: fetched user config from "gcp"
Nov 12 20:55:14.280046 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.292251 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:55:14.280060 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.309583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:55:14.280202 ignition[759]: parsed url from cmdline: ""
Nov 12 20:55:14.352232 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:55:14.280211 ignition[759]: no config URL provided
Nov 12 20:55:14.381592 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:55:14.280219 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.416629 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:55:14.280232 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.420787 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:55:14.280256 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 12 20:55:14.446577 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:55:14.283949 ignition[759]: GET result: OK
Nov 12 20:55:14.456640 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:55:14.284058 ignition[759]: parsing config with SHA512: 545ba5d33fd1fa66e8d9f7cb87512827d13c2e840d6c615d3cf11263f585d7167e9e940e03c86fba9ff4ea2551068e0af1acbb62ddb2e26a064b8ca28e284f07
Nov 12 20:55:14.474642 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:55:14.290540 ignition[759]: fetch: fetch complete
Nov 12 20:55:14.491626 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:55:14.290549 ignition[759]: fetch: fetch passed
Nov 12 20:55:14.513644 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:55:14.290616 ignition[759]: Ignition finished successfully
Nov 12 20:55:14.349783 ignition[765]: Ignition 2.19.0
Nov 12 20:55:14.349790 ignition[765]: Stage: kargs
Nov 12 20:55:14.349988 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.350000 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.351133 ignition[765]: kargs: kargs passed
Nov 12 20:55:14.351183 ignition[765]: Ignition finished successfully
Nov 12 20:55:14.414204 ignition[771]: Ignition 2.19.0
Nov 12 20:55:14.414213 ignition[771]: Stage: disks
Nov 12 20:55:14.414442 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.414457 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.415499 ignition[771]: disks: disks passed
Nov 12 20:55:14.415556 ignition[771]: Ignition finished successfully
Nov 12 20:55:14.566269 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 12 20:55:14.767394 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:55:14.785501 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:55:14.904536 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:55:14.905425 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:55:14.906253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:55:14.936481 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:55:14.955487 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:55:14.979398 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Nov 12 20:55:14.979652 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:55:15.041530 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:15.041579 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:15.041604 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:55:15.041628 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:55:15.041644 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:55:14.979730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:55:14.979773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:55:15.028477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:55:15.049736 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:55:15.074716 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:55:15.192818 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:55:15.203514 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:55:15.213492 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:55:15.223501 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:55:15.345746 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:55:15.350485 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:55:15.389411 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:15.397594 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:55:15.407644 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:55:15.451881 ignition[903]: INFO : Ignition 2.19.0
Nov 12 20:55:15.451881 ignition[903]: INFO : Stage: mount
Nov 12 20:55:15.476630 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:15.476630 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:15.476630 ignition[903]: INFO : mount: mount passed
Nov 12 20:55:15.476630 ignition[903]: INFO : Ignition finished successfully
Nov 12 20:55:15.452773 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:55:15.469824 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:55:15.491507 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:55:15.607929 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (915)
Nov 12 20:55:15.607964 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:15.607980 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:15.608005 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:55:15.608020 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:55:15.608035 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:55:15.515747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:55:15.611450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:55:15.663418 ignition[932]: INFO : Ignition 2.19.0
Nov 12 20:55:15.663418 ignition[932]: INFO : Stage: files
Nov 12 20:55:15.677482 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:15.677482 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:15.677482 ignition[932]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:55:15.677482 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:55:15.676335 unknown[932]: wrote ssh authorized keys file for user: core
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:55:15.685529 systemd-networkd[751]: eth0: Gained IPv6LL
Nov 12 20:55:17.890928 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 20:55:18.167556 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:55:18.167556 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:55:18.448160 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 20:55:18.792059 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.792059 ignition[932]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: files passed
Nov 12 20:55:18.831520 ignition[932]: INFO : Ignition finished successfully
Nov 12 20:55:18.797329 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:55:18.826602 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:55:18.862565 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:55:18.877947 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:55:19.105502 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:19.105502 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:18.878063 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:55:19.160512 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:18.969128 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:55:18.970789 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:55:18.999654 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:55:19.078839 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:55:19.078963 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:55:19.098219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:55:19.115593 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:55:19.129725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:55:19.136628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:55:19.203402 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:55:19.222546 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:55:19.256481 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:55:19.269667 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:55:19.290680 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:55:19.308645 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:55:19.308826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:55:19.342689 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:55:19.363710 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:55:19.382736 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:55:19.402632 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:55:19.423696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:55:19.442640 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:55:19.460706 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:55:19.482686 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:55:19.502709 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:55:19.520709 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:55:19.538651 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:55:19.538844 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:55:19.579653 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:55:19.600687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:55:19.621678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:55:19.621847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:55:19.642626 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:55:19.642848 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:55:19.766588 ignition[984]: INFO : Ignition 2.19.0 Nov 12 20:55:19.766588 ignition[984]: INFO : Stage: umount Nov 12 20:55:19.766588 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:55:19.766588 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 12 20:55:19.766588 ignition[984]: INFO : umount: umount passed Nov 12 20:55:19.766588 ignition[984]: INFO : Ignition finished successfully Nov 12 20:55:19.667685 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:55:19.667890 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:55:19.688737 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:55:19.688914 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:55:19.714600 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:55:19.719635 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:55:19.719829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:55:19.781623 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:55:19.791638 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:55:19.791822 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:55:19.853715 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:55:19.853888 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:55:19.887194 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:55:19.888221 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:55:19.888333 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:55:19.903008 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Nov 12 20:55:19.903117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:55:19.924491 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:55:19.924661 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:55:19.946393 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:55:19.946451 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:55:19.955793 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:55:19.955855 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:55:19.972671 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:55:19.972725 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 20:55:19.989767 systemd[1]: Stopped target network.target - Network. Nov 12 20:55:20.007635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:55:20.007702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:55:20.022673 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:55:20.040618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:55:20.044426 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:55:20.055617 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:55:20.081571 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:55:20.089705 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:55:20.089762 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:55:20.104680 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:55:20.104736 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:55:20.119658 systemd[1]: ignition-setup.service: Deactivated successfully. 
Nov 12 20:55:20.119722 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:55:20.136668 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:55:20.136722 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:55:20.154684 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:55:20.154739 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:55:20.171896 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:55:20.176420 systemd-networkd[751]: eth0: DHCPv6 lease lost Nov 12 20:55:20.199647 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:55:20.218068 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:55:20.218225 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:55:20.236853 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:55:20.237254 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:55:20.245005 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:55:20.245071 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:55:20.265480 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:55:20.717468 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Nov 12 20:55:20.277621 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:55:20.277687 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:55:20.304724 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:55:20.304784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:55:20.322748 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Nov 12 20:55:20.322819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:55:20.350661 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:55:20.350735 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:55:20.374797 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:55:20.384114 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:55:20.384294 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:55:20.416804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:55:20.416926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:55:20.429533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:55:20.429594 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:55:20.446604 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:55:20.446667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:55:20.471755 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:55:20.471827 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:55:20.497751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:55:20.497838 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:55:20.533549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:55:20.547459 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:55:20.547558 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:55:20.558557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 12 20:55:20.558636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:55:20.570112 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:55:20.570260 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:55:20.589888 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:55:20.589998 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:55:20.611059 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:55:20.632563 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:55:20.671300 systemd[1]: Switching root. Nov 12 20:55:21.040442 systemd-journald[183]: Journal stopped 
Nov 12 20:55:11.076309 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Nov 12 20:55:11.076329 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Nov 12 20:55:11.076347 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Nov 12 20:55:11.076393 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Nov 12 20:55:11.076411 kernel: ACPI: SRAT 
0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Nov 12 20:55:11.076429 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Nov 12 20:55:11.076447 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Nov 12 20:55:11.076469 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Nov 12 20:55:11.076486 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Nov 12 20:55:11.076503 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Nov 12 20:55:11.076519 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Nov 12 20:55:11.076537 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Nov 12 20:55:11.076554 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Nov 12 20:55:11.076572 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Nov 12 20:55:11.076589 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Nov 12 20:55:11.076607 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Nov 12 20:55:11.076629 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Nov 12 20:55:11.076646 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:55:11.076663 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 12 20:55:11.076681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 12 20:55:11.076699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Nov 12 20:55:11.076716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Nov 12 20:55:11.076734 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Nov 12 20:55:11.076753 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Nov 12 20:55:11.076769 kernel: NODE_DATA(0) allocated [mem 
0x21fffa000-0x21fffffff] Nov 12 20:55:11.076792 kernel: Zone ranges: Nov 12 20:55:11.076812 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:55:11.076829 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 12 20:55:11.076847 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Nov 12 20:55:11.076866 kernel: Movable zone start for each node Nov 12 20:55:11.076884 kernel: Early memory node ranges Nov 12 20:55:11.076903 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Nov 12 20:55:11.076921 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Nov 12 20:55:11.076940 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Nov 12 20:55:11.076964 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Nov 12 20:55:11.076982 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Nov 12 20:55:11.077000 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Nov 12 20:55:11.077026 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:55:11.077045 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Nov 12 20:55:11.077063 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Nov 12 20:55:11.077081 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 12 20:55:11.077099 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Nov 12 20:55:11.077118 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 12 20:55:11.077141 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 20:55:11.077160 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:55:11.077179 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 20:55:11.077197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:55:11.077214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 20:55:11.077232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 20:55:11.077249 kernel: 
ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:55:11.077265 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 12 20:55:11.077283 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 12 20:55:11.077304 kernel: Booting paravirtualized kernel on KVM Nov 12 20:55:11.077321 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:55:11.077338 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 12 20:55:11.077355 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Nov 12 20:55:11.077389 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 12 20:55:11.077406 kernel: pcpu-alloc: [0] 0 1 Nov 12 20:55:11.077424 kernel: kvm-guest: PV spinlocks enabled Nov 12 20:55:11.077442 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:55:11.077462 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:55:11.077486 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:55:11.077504 kernel: random: crng init done Nov 12 20:55:11.077522 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 12 20:55:11.077540 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:55:11.077558 kernel: Fallback order for Node 0: 0 Nov 12 20:55:11.077576 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932280 Nov 12 20:55:11.077594 kernel: Policy zone: Normal Nov 12 20:55:11.077612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:55:11.077633 kernel: software IO TLB: area num 2. Nov 12 20:55:11.077652 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 346940K reserved, 0K cma-reserved) Nov 12 20:55:11.077670 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 20:55:11.077688 kernel: Kernel/User page tables isolation: enabled Nov 12 20:55:11.077706 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:55:11.077723 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:55:11.077740 kernel: Dynamic Preempt: voluntary Nov 12 20:55:11.077757 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:55:11.077777 kernel: rcu: RCU event tracing is enabled. Nov 12 20:55:11.077812 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 20:55:11.077832 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:55:11.077851 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:55:11.077873 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:55:11.077893 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:55:11.077912 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 20:55:11.077931 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 12 20:55:11.077950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:55:11.077970 kernel: Console: colour dummy device 80x25 Nov 12 20:55:11.077993 kernel: printk: console [ttyS0] enabled Nov 12 20:55:11.078020 kernel: ACPI: Core revision 20230628 Nov 12 20:55:11.078038 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:55:11.078057 kernel: x2apic enabled Nov 12 20:55:11.078077 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:55:11.078111 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Nov 12 20:55:11.078131 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 12 20:55:11.078151 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Nov 12 20:55:11.078174 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Nov 12 20:55:11.078194 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Nov 12 20:55:11.078214 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:55:11.078233 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 12 20:55:11.078260 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 12 20:55:11.078279 kernel: Spectre V2 : Mitigation: IBRS Nov 12 20:55:11.078298 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:55:11.078318 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:55:11.078337 kernel: RETBleed: Mitigation: IBRS Nov 12 20:55:11.078373 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:55:11.078404 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Nov 12 20:55:11.078424 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:55:11.078443 kernel: MDS: Mitigation: Clear CPU buffers Nov 12 20:55:11.078462 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 
12 20:55:11.078482 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:55:11.078501 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:55:11.078521 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:55:11.078540 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:55:11.078563 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 12 20:55:11.078582 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:55:11.078601 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:55:11.078621 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:55:11.078640 kernel: landlock: Up and running. Nov 12 20:55:11.078659 kernel: SELinux: Initializing. Nov 12 20:55:11.078678 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:55:11.078698 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:55:11.078717 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Nov 12 20:55:11.078740 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:55:11.078759 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:55:11.078778 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:55:11.078798 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Nov 12 20:55:11.078817 kernel: signal: max sigframe size: 1776 Nov 12 20:55:11.078837 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:55:11.078857 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:55:11.078876 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:55:11.078895 kernel: smp: Bringing up secondary CPUs ... 
Nov 12 20:55:11.078918 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:55:11.078938 kernel: .... node #0, CPUs: #1 Nov 12 20:55:11.078957 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 12 20:55:11.078978 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 12 20:55:11.078997 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:55:11.079024 kernel: smpboot: Max logical packages: 1 Nov 12 20:55:11.079043 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 12 20:55:11.079062 kernel: devtmpfs: initialized Nov 12 20:55:11.079085 kernel: x86/mm: Memory block size: 128MB Nov 12 20:55:11.079104 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Nov 12 20:55:11.079123 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:55:11.079143 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 20:55:11.079162 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:55:11.079181 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:55:11.079201 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:55:11.079221 kernel: audit: type=2000 audit(1731444909.500:1): state=initialized audit_enabled=0 res=1 Nov 12 20:55:11.079240 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:55:11.079262 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:55:11.079281 kernel: cpuidle: using governor menu Nov 12 20:55:11.079300 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:55:11.079319 kernel: dca service started, version 1.12.1 Nov 12 20:55:11.079339 kernel: PCI: Using configuration type 1 for base access Nov 12 
20:55:11.079370 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 12 20:55:11.079387 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:55:11.079403 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:55:11.079419 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:55:11.079443 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:55:11.079460 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:55:11.079477 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:55:11.079495 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:55:11.079513 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:55:11.079531 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 12 20:55:11.079550 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:55:11.079570 kernel: ACPI: Interpreter enabled Nov 12 20:55:11.079590 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 20:55:11.079614 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:55:11.079633 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:55:11.079652 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 12 20:55:11.079672 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Nov 12 20:55:11.079689 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 20:55:11.079953 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:55:11.080171 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 12 20:55:11.080377 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 12 20:55:11.080409 kernel: PCI host bridge to bus 0000:00 Nov 12 20:55:11.080585 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Nov 12 20:55:11.080766 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:55:11.080925 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:55:11.081090 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 12 20:55:11.081247 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:55:11.081480 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:55:11.081684 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 12 20:55:11.081877 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 12 20:55:11.082072 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 12 20:55:11.082275 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 12 20:55:11.082493 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 12 20:55:11.082688 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 12 20:55:11.082885 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:55:11.083091 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 12 20:55:11.083278 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 12 20:55:11.083529 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:55:11.083727 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 12 20:55:11.083919 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 12 20:55:11.083952 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:55:11.083972 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:55:11.083992 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:55:11.084020 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:55:11.084038 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:55:11.084057 kernel: iommu: Default domain type: Translated
Nov 12 20:55:11.084074 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:55:11.084091 kernel: efivars: Registered efivars operations
Nov 12 20:55:11.084108 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:55:11.084131 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:55:11.084147 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 12 20:55:11.084165 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 12 20:55:11.084183 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 12 20:55:11.084201 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 12 20:55:11.084219 kernel: vgaarb: loaded
Nov 12 20:55:11.084237 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:55:11.084256 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:55:11.084274 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:55:11.084295 kernel: pnp: PnP ACPI init
Nov 12 20:55:11.084321 kernel: pnp: PnP ACPI: found 7 devices
Nov 12 20:55:11.084339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:55:11.084373 kernel: NET: Registered PF_INET protocol family
Nov 12 20:55:11.084392 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:55:11.084410 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 12 20:55:11.084429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:55:11.084447 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:55:11.084466 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 12 20:55:11.084490 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 12 20:55:11.084507 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:55:11.084535 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:55:11.084555 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:55:11.084574 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:55:11.084766 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:55:11.084936 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:55:11.085111 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:55:11.085279 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 12 20:55:11.085516 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:55:11.085544 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:55:11.085563 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 12 20:55:11.085584 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 12 20:55:11.085604 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:55:11.085623 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 12 20:55:11.085643 kernel: clocksource: Switched to clocksource tsc
Nov 12 20:55:11.085668 kernel: Initialise system trusted keyrings
Nov 12 20:55:11.085687 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 12 20:55:11.085706 kernel: Key type asymmetric registered
Nov 12 20:55:11.085726 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:55:11.085745 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:55:11.085763 kernel: io scheduler mq-deadline registered
Nov 12 20:55:11.085783 kernel: io scheduler kyber registered
Nov 12 20:55:11.085802 kernel: io scheduler bfq registered
Nov 12 20:55:11.085822 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:55:11.085846 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 12 20:55:11.086050 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 12 20:55:11.086076 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 12 20:55:11.086280 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 12 20:55:11.086306 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 12 20:55:11.086544 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 12 20:55:11.086571 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:55:11.086590 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086609 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086634 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086653 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 12 20:55:11.086845 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 12 20:55:11.086870 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:55:11.086886 kernel: i8042: Warning: Keylock active
Nov 12 20:55:11.086901 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:55:11.086917 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:55:11.087121 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 12 20:55:11.087307 kernel: rtc_cmos 00:00: registered as rtc0
Nov 12 20:55:11.087495 kernel: rtc_cmos 00:00: setting system clock to 2024-11-12T20:55:10 UTC (1731444910)
Nov 12 20:55:11.087660 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 12 20:55:11.087683 kernel: intel_pstate: CPU model not supported
Nov 12 20:55:11.087702 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:55:11.087720 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:55:11.087738 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:55:11.087756 kernel: Segment Routing with IPv6
Nov 12 20:55:11.087780 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:55:11.087798 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:55:11.087817 kernel: Key type dns_resolver registered
Nov 12 20:55:11.087835 kernel: IPI shorthand broadcast: enabled
Nov 12 20:55:11.087853 kernel: sched_clock: Marking stable (846004104, 133897450)->(998289375, -18387821)
Nov 12 20:55:11.087871 kernel: registered taskstats version 1
Nov 12 20:55:11.087889 kernel: Loading compiled-in X.509 certificates
Nov 12 20:55:11.087907 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:55:11.087926 kernel: Key type .fscrypt registered
Nov 12 20:55:11.087947 kernel: Key type fscrypt-provisioning registered
Nov 12 20:55:11.087966 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:55:11.087984 kernel: ima: No architecture policies found
Nov 12 20:55:11.088002 kernel: clk: Disabling unused clocks
Nov 12 20:55:11.088028 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:55:11.088047 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:55:11.088065 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:55:11.088083 kernel: Run /init as init process
Nov 12 20:55:11.088105 kernel: with arguments:
Nov 12 20:55:11.088123 kernel: /init
Nov 12 20:55:11.088140 kernel: with environment:
Nov 12 20:55:11.088157 kernel: HOME=/
Nov 12 20:55:11.088173 kernel: TERM=linux
Nov 12 20:55:11.088190 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:55:11.088209 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:55:11.088231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:55:11.088256 systemd[1]: Detected virtualization google.
Nov 12 20:55:11.088276 systemd[1]: Detected architecture x86-64.
Nov 12 20:55:11.088295 systemd[1]: Running in initrd.
Nov 12 20:55:11.088313 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:55:11.088331 systemd[1]: Hostname set to .
Nov 12 20:55:11.088351 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:55:11.089187 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:55:11.089210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:55:11.089238 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:55:11.089259 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:55:11.089279 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:55:11.089299 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:55:11.089318 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:55:11.089341 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:55:11.089384 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:55:11.089410 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:55:11.089431 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:55:11.089472 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:55:11.089496 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:55:11.089516 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:55:11.089537 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:55:11.089560 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:55:11.089580 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:55:11.089601 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:55:11.089621 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:55:11.089644 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:55:11.089665 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:55:11.089684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:55:11.089704 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:55:11.089725 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:55:11.089750 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:55:11.089770 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:55:11.089789 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:55:11.089810 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:55:11.089830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:55:11.089849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:11.089905 systemd-journald[183]: Collecting audit messages is disabled.
Nov 12 20:55:11.089954 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:55:11.089975 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:55:11.089994 systemd-journald[183]: Journal started
Nov 12 20:55:11.090052 systemd-journald[183]: Runtime Journal (/run/log/journal/eef95b32d1604831afb41380b96a1f8b) is 8.0M, max 148.7M, 140.7M free.
Nov 12 20:55:11.096179 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:55:11.100458 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:55:11.109330 systemd-modules-load[184]: Inserted module 'overlay'
Nov 12 20:55:11.111608 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:55:11.115512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:55:11.132097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:11.141729 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:55:11.159419 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:55:11.160594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:55:11.165597 kernel: Bridge firewalling registered
Nov 12 20:55:11.162027 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 12 20:55:11.169552 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:55:11.170845 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:55:11.171905 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:55:11.186554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:55:11.189934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:11.199626 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:55:11.200560 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:55:11.210868 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:55:11.223574 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:55:11.245523 dracut-cmdline[214]: dracut-dracut-053
Nov 12 20:55:11.249626 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:55:11.278773 systemd-resolved[218]: Positive Trust Anchors:
Nov 12 20:55:11.279131 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:55:11.279202 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:55:11.284247 systemd-resolved[218]: Defaulting to hostname 'linux'.
Nov 12 20:55:11.285943 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:55:11.298567 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:55:11.354402 kernel: SCSI subsystem initialized
Nov 12 20:55:11.364396 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:55:11.376397 kernel: iscsi: registered transport (tcp)
Nov 12 20:55:11.399503 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:55:11.399583 kernel: QLogic iSCSI HBA Driver
Nov 12 20:55:11.451310 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:55:11.455623 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:55:11.495983 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:55:11.496053 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:55:11.496082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:55:11.541405 kernel: raid6: avx2x4 gen() 18131 MB/s
Nov 12 20:55:11.558403 kernel: raid6: avx2x2 gen() 18137 MB/s
Nov 12 20:55:11.575769 kernel: raid6: avx2x1 gen() 13924 MB/s
Nov 12 20:55:11.575809 kernel: raid6: using algorithm avx2x2 gen() 18137 MB/s
Nov 12 20:55:11.593787 kernel: raid6: .... xor() 17633 MB/s, rmw enabled
Nov 12 20:55:11.593825 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:55:11.616391 kernel: xor: automatically using best checksumming function avx
Nov 12 20:55:11.790405 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:55:11.803259 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:55:11.810575 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:55:11.843499 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Nov 12 20:55:11.850224 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:55:11.860614 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:55:11.887133 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Nov 12 20:55:11.922484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:55:11.929634 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:55:12.008087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:55:12.019617 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:55:12.058976 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:55:12.070013 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:55:12.078504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:55:12.082655 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:55:12.098992 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:55:12.107929 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:55:12.123380 kernel: scsi host0: Virtio SCSI HBA
Nov 12 20:55:12.128573 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 12 20:55:12.150146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:55:12.205325 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:55:12.205415 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:55:12.230048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:55:12.230344 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:12.237707 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:55:12.239188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:55:12.249014 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Nov 12 20:55:12.265732 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 12 20:55:12.265993 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 12 20:55:12.266220 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 12 20:55:12.267517 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 12 20:55:12.267750 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:55:12.267777 kernel: GPT:17805311 != 25165823
Nov 12 20:55:12.267800 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:55:12.267824 kernel: GPT:17805311 != 25165823
Nov 12 20:55:12.267846 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:55:12.267870 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:12.267909 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 12 20:55:12.239822 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:12.255515 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:12.269505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:55:12.307827 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:12.313960 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:55:12.330413 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (470)
Nov 12 20:55:12.337387 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (464)
Nov 12 20:55:12.353913 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 12 20:55:12.371917 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:12.379249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 12 20:55:12.387124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 12 20:55:12.393952 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Nov 12 20:55:12.398461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Nov 12 20:55:12.410578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:55:12.429945 disk-uuid[550]: Primary Header is updated.
Nov 12 20:55:12.429945 disk-uuid[550]: Secondary Entries is updated.
Nov 12 20:55:12.429945 disk-uuid[550]: Secondary Header is updated.
Nov 12 20:55:12.450392 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:12.474382 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:12.482396 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:13.483972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:55:13.484051 disk-uuid[551]: The operation has completed successfully.
Nov 12 20:55:13.552013 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:55:13.552167 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:55:13.581566 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:55:13.611434 sh[568]: Success
Nov 12 20:55:13.633381 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:55:13.711924 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:55:13.719300 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:55:13.746855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:55:13.787729 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:55:13.787793 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:13.787830 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:55:13.797167 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:55:13.803995 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:55:13.836435 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 20:55:13.842560 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:55:13.843543 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:55:13.849596 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:55:13.914575 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:13.914625 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:13.914648 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:55:13.869705 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:55:13.963113 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:55:13.963159 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:55:13.963186 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:13.944971 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:55:13.966763 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:55:13.998623 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:55:14.073145 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:55:14.083646 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:55:14.183122 systemd-networkd[751]: lo: Link UP
Nov 12 20:55:14.183137 systemd-networkd[751]: lo: Gained carrier
Nov 12 20:55:14.188154 systemd-networkd[751]: Enumeration completed
Nov 12 20:55:14.188313 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:55:14.212800 ignition[681]: Ignition 2.19.0
Nov 12 20:55:14.188866 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:55:14.212812 ignition[681]: Stage: fetch-offline
Nov 12 20:55:14.188875 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:55:14.212859 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.191189 systemd-networkd[751]: eth0: Link UP
Nov 12 20:55:14.212872 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.191198 systemd-networkd[751]: eth0: Gained carrier
Nov 12 20:55:14.213005 ignition[681]: parsed url from cmdline: ""
Nov 12 20:55:14.191213 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:55:14.213015 ignition[681]: no config URL provided
Nov 12 20:55:14.202468 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.109/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 12 20:55:14.213025 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.207666 systemd[1]: Reached target network.target - Network.
Nov 12 20:55:14.213037 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.216874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:55:14.213045 ignition[681]: failed to fetch config: resource requires networking
Nov 12 20:55:14.230632 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:55:14.213486 ignition[681]: Ignition finished successfully
Nov 12 20:55:14.289878 unknown[759]: fetched base config from "system"
Nov 12 20:55:14.279838 ignition[759]: Ignition 2.19.0
Nov 12 20:55:14.289890 unknown[759]: fetched base config from "system"
Nov 12 20:55:14.279857 ignition[759]: Stage: fetch
Nov 12 20:55:14.289899 unknown[759]: fetched user config from "gcp"
Nov 12 20:55:14.280046 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.292251 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:55:14.280060 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.309583 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:55:14.280202 ignition[759]: parsed url from cmdline: ""
Nov 12 20:55:14.352232 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:55:14.280211 ignition[759]: no config URL provided
Nov 12 20:55:14.381592 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:55:14.280219 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.416629 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:55:14.280232 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:55:14.420787 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:55:14.280256 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 12 20:55:14.446577 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:55:14.283949 ignition[759]: GET result: OK
Nov 12 20:55:14.456640 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:55:14.284058 ignition[759]: parsing config with SHA512: 545ba5d33fd1fa66e8d9f7cb87512827d13c2e840d6c615d3cf11263f585d7167e9e940e03c86fba9ff4ea2551068e0af1acbb62ddb2e26a064b8ca28e284f07
Nov 12 20:55:14.474642 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:55:14.290540 ignition[759]: fetch: fetch complete
Nov 12 20:55:14.491626 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:55:14.290549 ignition[759]: fetch: fetch passed
Nov 12 20:55:14.513644 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:55:14.290616 ignition[759]: Ignition finished successfully
Nov 12 20:55:14.349783 ignition[765]: Ignition 2.19.0
Nov 12 20:55:14.349790 ignition[765]: Stage: kargs
Nov 12 20:55:14.349988 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.350000 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.351133 ignition[765]: kargs: kargs passed
Nov 12 20:55:14.351183 ignition[765]: Ignition finished successfully
Nov 12 20:55:14.414204 ignition[771]: Ignition 2.19.0
Nov 12 20:55:14.414213 ignition[771]: Stage: disks
Nov 12 20:55:14.414442 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:14.414457 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:14.415499 ignition[771]: disks: disks passed
Nov 12 20:55:14.415556 ignition[771]: Ignition finished successfully
Nov 12 20:55:14.566269 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 12 20:55:14.767394 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:55:14.785501 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:55:14.904536 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:55:14.905425 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:55:14.906253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:55:14.936481 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:55:14.955487 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:55:14.979398 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Nov 12 20:55:14.979652 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:55:15.041530 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:15.041579 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:15.041604 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:55:15.041628 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:55:15.041644 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:55:14.979730 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:55:14.979773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:55:15.028477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:55:15.049736 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:55:15.074716 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:55:15.192818 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:55:15.203514 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:55:15.213492 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:55:15.223501 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:55:15.345746 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:55:15.350485 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:55:15.389411 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:15.397594 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:55:15.407644 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:55:15.451881 ignition[903]: INFO : Ignition 2.19.0
Nov 12 20:55:15.451881 ignition[903]: INFO : Stage: mount
Nov 12 20:55:15.476630 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:15.476630 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:15.476630 ignition[903]: INFO : mount: mount passed
Nov 12 20:55:15.476630 ignition[903]: INFO : Ignition finished successfully
Nov 12 20:55:15.452773 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:55:15.469824 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:55:15.491507 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:55:15.607929 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (915)
Nov 12 20:55:15.607964 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:55:15.607980 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:55:15.608005 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:55:15.608020 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:55:15.608035 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:55:15.515747 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:55:15.611450 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:55:15.663418 ignition[932]: INFO : Ignition 2.19.0
Nov 12 20:55:15.663418 ignition[932]: INFO : Stage: files
Nov 12 20:55:15.677482 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:15.677482 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:15.677482 ignition[932]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:55:15.677482 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:55:15.677482 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:55:15.676335 unknown[932]: wrote ssh authorized keys file for user: core
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:55:15.775477 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:55:15.685529 systemd-networkd[751]: eth0: Gained IPv6LL
Nov 12 20:55:17.890928 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 12 20:55:18.167556 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:55:18.167556 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.199539 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:55:18.448160 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 12 20:55:18.792059 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:55:18.792059 ignition[932]: INFO : files: op(c): [started] processing unit "containerd.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(c): [finished] processing unit "containerd.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:55:18.831520 ignition[932]: INFO : files: files passed
Nov 12 20:55:18.831520 ignition[932]: INFO : Ignition finished successfully
Nov 12 20:55:18.797329 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:55:18.826602 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:55:18.862565 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:55:18.877947 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:55:19.105502 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:19.105502 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:18.878063 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:55:19.160512 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:55:18.969128 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:55:18.970789 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:55:18.999654 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:55:19.078839 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:55:19.078963 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:55:19.098219 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:55:19.115593 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:55:19.129725 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:55:19.136628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:55:19.203402 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:55:19.222546 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:55:19.256481 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:55:19.269667 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:55:19.290680 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:55:19.308645 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:55:19.308826 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:55:19.342689 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:55:19.363710 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:55:19.382736 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:55:19.402632 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:55:19.423696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:55:19.442640 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:55:19.460706 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:55:19.482686 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:55:19.502709 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:55:19.520709 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:55:19.538651 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:55:19.538844 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:55:19.579653 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:55:19.600687 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:55:19.621678 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:55:19.621847 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:55:19.642626 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:55:19.642848 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:55:19.766588 ignition[984]: INFO : Ignition 2.19.0
Nov 12 20:55:19.766588 ignition[984]: INFO : Stage: umount
Nov 12 20:55:19.766588 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:55:19.766588 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:55:19.766588 ignition[984]: INFO : umount: umount passed
Nov 12 20:55:19.766588 ignition[984]: INFO : Ignition finished successfully
Nov 12 20:55:19.667685 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:55:19.667890 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:55:19.688737 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:55:19.688914 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:55:19.714600 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:55:19.719635 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:55:19.719829 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:55:19.781623 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:55:19.791638 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:55:19.791822 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:55:19.853715 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:55:19.853888 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:55:19.887194 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:55:19.888221 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:55:19.888333 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:55:19.903008 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:55:19.903117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:55:19.924491 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:55:19.924661 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:55:19.946393 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:55:19.946451 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:55:19.955793 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:55:19.955855 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:55:19.972671 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:55:19.972725 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:55:19.989767 systemd[1]: Stopped target network.target - Network.
Nov 12 20:55:20.007635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:55:20.007702 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:55:20.022673 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:55:20.040618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:55:20.044426 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:55:20.055617 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:55:20.081571 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:55:20.089705 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:55:20.089762 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:55:20.104680 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:55:20.104736 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:55:20.119658 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:55:20.119722 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:55:20.136668 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:55:20.136722 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:55:20.154684 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:55:20.154739 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:55:20.171896 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:55:20.176420 systemd-networkd[751]: eth0: DHCPv6 lease lost
Nov 12 20:55:20.199647 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:55:20.218068 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:55:20.218225 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:55:20.236853 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:55:20.237254 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:55:20.245005 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:55:20.245071 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:55:20.265480 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:55:20.717468 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:55:20.277621 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:55:20.277687 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:55:20.304724 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:55:20.304784 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:55:20.322748 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:55:20.322819 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:55:20.350661 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:55:20.350735 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:55:20.374797 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:55:20.384114 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:55:20.384294 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:55:20.416804 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:55:20.416926 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:55:20.429533 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:55:20.429594 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:55:20.446604 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:55:20.446667 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:55:20.471755 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:55:20.471827 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:55:20.497751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:55:20.497838 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:55:20.533549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:55:20.547459 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:55:20.547558 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:55:20.558557 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:55:20.558636 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:55:20.570112 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:55:20.570260 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:55:20.589888 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:55:20.589998 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:55:20.611059 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:55:20.632563 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:55:20.671300 systemd[1]: Switching root.
Nov 12 20:55:21.040442 systemd-journald[183]: Journal stopped
Nov 12 20:55:23.251074 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:55:23.251129 kernel: SELinux: policy capability open_perms=1
Nov 12 20:55:23.251152 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:55:23.251168 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:55:23.251186 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:55:23.251204 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:55:23.251225 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:55:23.251249 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:55:23.251268 kernel: audit: type=1403 audit(1731444921.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:55:23.251290 systemd[1]: Successfully loaded SELinux policy in 90.818ms.
Nov 12 20:55:23.251314 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.634ms.
Nov 12 20:55:23.251335 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:55:23.251356 systemd[1]: Detected virtualization google.
Nov 12 20:55:23.251396 systemd[1]: Detected architecture x86-64.
Nov 12 20:55:23.251430 systemd[1]: Detected first boot.
Nov 12 20:55:23.251452 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:55:23.251473 zram_generator::config[1042]: No configuration found.
Nov 12 20:55:23.251495 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:55:23.251516 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:55:23.251542 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 12 20:55:23.251564 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:55:23.251585 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:55:23.251606 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:55:23.251626 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:55:23.251649 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:55:23.251672 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:55:23.251699 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:55:23.251722 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:55:23.251746 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:55:23.251769 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:55:23.251790 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:55:23.251812 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:55:23.251834 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:55:23.251857 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:55:23.251883 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:55:23.251916 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:55:23.251939 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:55:23.251959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:55:23.251981 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:55:23.252002 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:55:23.252032 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:55:23.252056 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:55:23.252080 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:55:23.252109 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:55:23.252133 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:55:23.252156 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:55:23.252180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:55:23.252204 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:55:23.252228 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:55:23.252255 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:55:23.252283 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:55:23.252307 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:55:23.252332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:23.252354 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:55:23.252413 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:55:23.252438 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:55:23.252462 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:55:23.252487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:55:23.252512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:55:23.252535 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:55:23.252558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:55:23.252580 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:55:23.252602 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:55:23.252630 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:55:23.252652 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:55:23.252673 kernel: fuse: init (API version 7.39)
Nov 12 20:55:23.252693 kernel: ACPI: bus type drm_connector registered
Nov 12 20:55:23.252715 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:55:23.252736 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Nov 12 20:55:23.252756 kernel: loop: module loaded
Nov 12 20:55:23.252776 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Nov 12 20:55:23.252805 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:55:23.252820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:55:23.252860 systemd-journald[1148]: Collecting audit messages is disabled.
Nov 12 20:55:23.252888 systemd-journald[1148]: Journal started
Nov 12 20:55:23.252925 systemd-journald[1148]: Runtime Journal (/run/log/journal/c185bb7034a544439195b3158dcf0561) is 8.0M, max 148.7M, 140.7M free.
Nov 12 20:55:23.271398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:55:23.296403 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:55:23.327397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:55:23.353380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:55:23.363996 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:55:23.374907 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:55:23.384661 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:55:23.395784 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:55:23.405686 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:55:23.415765 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:55:23.426650 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:55:23.436899 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:55:23.448820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:55:23.460800 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:55:23.461051 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:55:23.472797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:55:23.473035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:55:23.484780 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:55:23.485016 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:55:23.495782 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:55:23.496017 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:55:23.507775 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:55:23.508025 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:55:23.517804 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:55:23.518078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:55:23.527916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:55:23.537819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:55:23.549830 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:55:23.561824 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:55:23.584245 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:55:23.606480 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:55:23.620474 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:55:23.630481 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:55:23.637566 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:55:23.655333 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:55:23.666518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:55:23.671631 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:55:23.683014 systemd-journald[1148]: Time spent on flushing to /var/log/journal/c185bb7034a544439195b3158dcf0561 is 84.313ms for 916 entries.
Nov 12 20:55:23.683014 systemd-journald[1148]: System Journal (/var/log/journal/c185bb7034a544439195b3158dcf0561) is 8.0M, max 584.8M, 576.8M free.
Nov 12 20:55:23.786180 systemd-journald[1148]: Received client request to flush runtime journal.
Nov 12 20:55:23.681508 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:55:23.694837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:55:23.712547 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:55:23.732131 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:55:23.751992 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:55:23.763678 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 12 20:55:23.776030 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:55:23.788027 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:55:23.798622 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:55:23.813808 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Nov 12 20:55:23.813842 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Nov 12 20:55:23.818232 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:55:23.830341 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:55:23.851785 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:55:23.862755 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 20:55:23.914529 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:55:23.932645 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:55:23.975395 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Nov 12 20:55:23.975864 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Nov 12 20:55:23.983508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:55:24.483946 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:55:24.501603 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:55:24.547485 systemd-udevd[1210]: Using default interface naming scheme 'v255'. 
Nov 12 20:55:24.585812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:55:24.609589 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:55:24.649593 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:55:24.676858 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Nov 12 20:55:24.769103 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1228) Nov 12 20:55:24.786998 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1228) Nov 12 20:55:24.827393 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1230) Nov 12 20:55:24.832231 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:55:24.930426 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Nov 12 20:55:24.951387 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Nov 12 20:55:25.009854 kernel: ACPI: button: Power Button [PWRF] Nov 12 20:55:25.009891 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Nov 12 20:55:25.009919 kernel: ACPI: button: Sleep Button [SLPF] Nov 12 20:55:25.009946 kernel: EDAC MC: Ver: 3.0.0 Nov 12 20:55:25.027422 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Nov 12 20:55:25.064203 systemd-networkd[1222]: lo: Link UP Nov 12 20:55:25.064220 systemd-networkd[1222]: lo: Gained carrier Nov 12 20:55:25.067539 systemd-networkd[1222]: Enumeration completed Nov 12 20:55:25.067716 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:55:25.068533 systemd-networkd[1222]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 12 20:55:25.068540 systemd-networkd[1222]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:55:25.069839 systemd-networkd[1222]: eth0: Link UP Nov 12 20:55:25.069882 systemd-networkd[1222]: eth0: Gained carrier Nov 12 20:55:25.069906 systemd-networkd[1222]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:55:25.076422 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:55:25.085488 systemd-networkd[1222]: eth0: DHCPv4 address 10.128.0.109/32, gateway 10.128.0.1 acquired from 169.254.169.254 Nov 12 20:55:25.090986 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Nov 12 20:55:25.107719 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:55:25.125737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:55:25.143035 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:55:25.162945 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:55:25.180504 lvm[1254]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:55:25.218514 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:55:25.221028 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:55:25.227251 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:55:25.239291 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:55:25.254529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:55:25.274576 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Nov 12 20:55:25.285834 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:55:25.297484 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:55:25.297531 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:55:25.307510 systemd[1]: Reached target machines.target - Containers. Nov 12 20:55:25.316827 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:55:25.333577 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:55:25.351320 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:55:25.361648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:55:25.368542 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:55:25.385673 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:55:25.407638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:55:25.410256 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:55:25.428878 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:55:25.446211 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:55:25.448316 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Nov 12 20:55:25.466390 kernel: loop0: detected capacity change from 0 to 211296 Nov 12 20:55:25.518543 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:55:25.551392 kernel: loop1: detected capacity change from 0 to 140768 Nov 12 20:55:25.620417 kernel: loop2: detected capacity change from 0 to 54824 Nov 12 20:55:25.677401 kernel: loop3: detected capacity change from 0 to 142488 Nov 12 20:55:25.753722 kernel: loop4: detected capacity change from 0 to 211296 Nov 12 20:55:25.787401 kernel: loop5: detected capacity change from 0 to 140768 Nov 12 20:55:25.827420 kernel: loop6: detected capacity change from 0 to 54824 Nov 12 20:55:25.852394 kernel: loop7: detected capacity change from 0 to 142488 Nov 12 20:55:25.880787 (sd-merge)[1283]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Nov 12 20:55:25.881797 (sd-merge)[1283]: Merged extensions into '/usr'. Nov 12 20:55:25.893547 systemd[1]: Reloading requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:55:25.893569 systemd[1]: Reloading... Nov 12 20:55:25.990453 zram_generator::config[1307]: No configuration found. Nov 12 20:55:26.103277 ldconfig[1265]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:55:26.117559 systemd-networkd[1222]: eth0: Gained IPv6LL Nov 12 20:55:26.191914 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:55:26.273921 systemd[1]: Reloading finished in 379 ms. Nov 12 20:55:26.297970 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:55:26.309957 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 12 20:55:26.319878 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Nov 12 20:55:26.341582 systemd[1]: Starting ensure-sysext.service... Nov 12 20:55:26.358150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:55:26.374521 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:55:26.374547 systemd[1]: Reloading... Nov 12 20:55:26.402736 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:55:26.403455 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 12 20:55:26.405294 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:55:26.405927 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Nov 12 20:55:26.406063 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Nov 12 20:55:26.412523 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:55:26.412542 systemd-tmpfiles[1361]: Skipping /boot Nov 12 20:55:26.440851 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:55:26.442566 systemd-tmpfiles[1361]: Skipping /boot Nov 12 20:55:26.474442 zram_generator::config[1385]: No configuration found. Nov 12 20:55:26.652349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:55:26.736880 systemd[1]: Reloading finished in 361 ms. Nov 12 20:55:26.764171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:55:26.792745 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:26.813989 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Nov 12 20:55:26.835721 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:55:26.847061 augenrules[1454]: No rules Nov 12 20:55:26.854953 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:55:26.873721 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:55:26.892084 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:26.903400 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:55:26.927039 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:55:26.950119 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:55:26.950592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:55:26.958736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:55:26.977728 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:55:26.993861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:55:27.015500 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:55:27.034842 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 12 20:55:27.044678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:55:27.045019 systemd[1]: Reached target time-set.target - System Time Set. 
Nov 12 20:55:27.048115 systemd-resolved[1460]: Positive Trust Anchors: Nov 12 20:55:27.048151 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:55:27.048211 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:55:27.053576 systemd-resolved[1460]: Defaulting to hostname 'linux'. Nov 12 20:55:27.066419 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:55:27.076506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:55:27.079536 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:55:27.090414 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:55:27.102167 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:55:27.102470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:55:27.114160 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:55:27.114455 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:55:27.125104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:55:27.125355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:55:27.137111 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:55:27.137407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Nov 12 20:55:27.152172 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:55:27.165865 systemd[1]: Finished ensure-sysext.service. Nov 12 20:55:27.176133 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 12 20:55:27.192185 systemd[1]: Reached target network.target - Network. Nov 12 20:55:27.200498 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:55:27.210487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:55:27.233620 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Nov 12 20:55:27.243512 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:55:27.243614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:55:27.243655 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:55:27.243703 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:55:27.253618 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:55:27.264598 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:55:27.275675 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:55:27.285629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:55:27.296476 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Nov 12 20:55:27.307480 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:55:27.307620 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:55:27.316450 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:55:27.325058 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:55:27.336301 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:55:27.345536 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Nov 12 20:55:27.357695 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:55:27.369459 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:55:27.379449 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:55:27.389457 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:55:27.397724 systemd[1]: System is tainted: cgroupsv1 Nov 12 20:55:27.397800 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:55:27.397840 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:55:27.409486 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:55:27.421225 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 12 20:55:27.439353 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:55:27.466178 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:55:27.488568 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:55:27.498480 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Nov 12 20:55:27.500460 jq[1512]: false Nov 12 20:55:27.509922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:27.530576 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:55:27.549612 systemd[1]: Started ntpd.service - Network Time Service. Nov 12 20:55:27.551016 coreos-metadata[1510]: Nov 12 20:55:27.549 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Nov 12 20:55:27.552878 coreos-metadata[1510]: Nov 12 20:55:27.551 INFO Fetch successful Nov 12 20:55:27.552878 coreos-metadata[1510]: Nov 12 20:55:27.551 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Nov 12 20:55:27.553686 coreos-metadata[1510]: Nov 12 20:55:27.553 INFO Fetch successful Nov 12 20:55:27.553686 coreos-metadata[1510]: Nov 12 20:55:27.553 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Nov 12 20:55:27.554530 coreos-metadata[1510]: Nov 12 20:55:27.554 INFO Fetch successful Nov 12 20:55:27.555505 coreos-metadata[1510]: Nov 12 20:55:27.555 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Nov 12 20:55:27.559259 coreos-metadata[1510]: Nov 12 20:55:27.557 INFO Fetch successful 
Nov 12 20:55:27.559973 extend-filesystems[1515]: Found loop4 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found loop5 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found loop6 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found loop7 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda1 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda2 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda3 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found usr Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda4 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda6 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda7 Nov 12 20:55:27.576557 extend-filesystems[1515]: Found sda9 Nov 12 20:55:27.576557 extend-filesystems[1515]: Checking size of /dev/sda9 Nov 12 20:55:27.738745 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Nov 12 20:55:27.738813 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Nov 12 20:55:27.572572 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:55:27.739127 extend-filesystems[1515]: Resized partition /dev/sda9 Nov 12 20:55:27.580184 dbus-daemon[1511]: [system] SELinux support is enabled 
Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: ---------------------------------------------------- Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: ntp-4 is maintained by Network Time Foundation, Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: corporation. Support and training for ntp-4 are Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: available at https://www.nwtime.org/support Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: ---------------------------------------------------- Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: proto: precision = 0.094 usec (-23) Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: basedate set to 2024-10-31 Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: gps base set to 2024-11-03 (week 2339) Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Listen and drop on 0 v6wildcard [::]:123 Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Listen normally on 2 lo 127.0.0.1:123 Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Listen normally on 3 eth0 10.128.0.109:123 Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Listen normally on 4 lo [::1]:123 Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:6d%2]:123 Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: Listening on routing socket on fd #22 for interface updates Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 12 20:55:27.747978 ntpd[1521]: 12 Nov 20:55:27 ntpd[1521]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 12 20:55:27.607017 systemd[1]: Starting oem-gce.service - GCE Linux Agent... 
Nov 12 20:55:27.755418 extend-filesystems[1534]: resize2fs 1.47.1 (20-May-2024) Nov 12 20:55:27.755418 extend-filesystems[1534]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 12 20:55:27.755418 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 12 20:55:27.755418 extend-filesystems[1534]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Nov 12 20:55:27.840057 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1552) Nov 12 20:55:27.589518 dbus-daemon[1511]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1222 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 12 20:55:27.658491 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 12 20:55:27.841468 extend-filesystems[1515]: Resized filesystem in /dev/sda9 Nov 12 20:55:27.659991 ntpd[1521]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting Nov 12 20:55:27.850047 init.sh[1535]: + '[' -e /etc/default/instance_configs.cfg.template ']' Nov 12 20:55:27.850047 init.sh[1535]: + echo -e '[InstanceSetup]\nset_host_keys = false' Nov 12 20:55:27.850047 init.sh[1535]: + /usr/bin/google_instance_setup Nov 12 20:55:27.680948 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:55:27.660021 ntpd[1521]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 12 20:55:27.703614 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:55:27.660036 ntpd[1521]: ---------------------------------------------------- Nov 12 20:55:27.735865 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 12 20:55:27.660048 ntpd[1521]: ntp-4 is maintained by Network Time Foundation, Nov 12 20:55:27.759203 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Nov 12 20:55:27.660061 ntpd[1521]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 12 20:55:27.776567 systemd[1]: Starting update-engine.service - Update Engine... Nov 12 20:55:27.660074 ntpd[1521]: corporation. Support and training for ntp-4 are Nov 12 20:55:27.799524 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:55:27.660088 ntpd[1521]: available at https://www.nwtime.org/support Nov 12 20:55:27.830065 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:55:27.660101 ntpd[1521]: ---------------------------------------------------- Nov 12 20:55:27.663826 ntpd[1521]: proto: precision = 0.094 usec (-23) Nov 12 20:55:27.664373 ntpd[1521]: basedate set to 2024-10-31 Nov 12 20:55:27.664398 ntpd[1521]: gps base set to 2024-11-03 (week 2339) Nov 12 20:55:27.669116 ntpd[1521]: Listen and drop on 0 v6wildcard [::]:123 Nov 12 20:55:27.669176 ntpd[1521]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 12 20:55:27.669431 ntpd[1521]: Listen normally on 2 lo 127.0.0.1:123 Nov 12 20:55:27.669488 ntpd[1521]: Listen normally on 3 eth0 10.128.0.109:123 Nov 12 20:55:27.669559 ntpd[1521]: Listen normally on 4 lo [::1]:123 Nov 12 20:55:27.669686 ntpd[1521]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:6d%2]:123 Nov 12 20:55:27.669743 ntpd[1521]: Listening on routing socket on fd #22 for interface updates Nov 12 20:55:27.671310 ntpd[1521]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 12 20:55:27.671345 ntpd[1521]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 12 20:55:27.877261 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Nov 12 20:55:27.881561 jq[1568]: true Nov 12 20:55:27.880739 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:55:27.881242 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:55:27.883110 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:55:27.905322 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:55:27.905722 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:55:27.916270 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:55:27.930941 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:55:27.931387 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:55:27.934979 update_engine[1565]: I20241112 20:55:27.934880 1565 main.cc:92] Flatcar Update Engine starting Nov 12 20:55:27.949390 update_engine[1565]: I20241112 20:55:27.949080 1565 update_check_scheduler.cc:74] Next update check in 10m16s Nov 12 20:55:27.971904 (ntainerd)[1577]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:55:27.993949 jq[1575]: true Nov 12 20:55:28.011162 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 12 20:55:28.046502 dbus-daemon[1511]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 12 20:55:28.050751 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Nov 12 20:55:28.050783 systemd-logind[1559]: Watching system buttons on /dev/input/event2 (Sleep Button) Nov 12 20:55:28.050813 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:55:28.062320 systemd-logind[1559]: New seat seat0. Nov 12 20:55:28.074571 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 12 20:55:28.107131 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:55:28.128387 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:55:28.129674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:55:28.129929 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:55:28.150868 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 12 20:55:28.155836 tar[1574]: linux-amd64/helm Nov 12 20:55:28.158521 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:55:28.158767 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 12 20:55:28.171439 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:55:28.191569 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:55:28.201418 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:55:28.216064 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:55:28.236706 systemd[1]: Starting sshkeys.service... Nov 12 20:55:28.319847 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 12 20:55:28.337807 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 12 20:55:28.442667 dbus-daemon[1511]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 12 20:55:28.442888 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 12 20:55:28.451178 dbus-daemon[1511]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1612 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 12 20:55:28.473470 systemd[1]: Starting polkit.service - Authorization Manager... Nov 12 20:55:28.500661 coreos-metadata[1621]: Nov 12 20:55:28.500 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 12 20:55:28.510597 coreos-metadata[1621]: Nov 12 20:55:28.509 INFO Fetch failed with 404: resource not found Nov 12 20:55:28.510597 coreos-metadata[1621]: Nov 12 20:55:28.510 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 12 20:55:28.511644 coreos-metadata[1621]: Nov 12 20:55:28.511 INFO Fetch successful Nov 12 20:55:28.511644 coreos-metadata[1621]: Nov 12 20:55:28.511 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 12 20:55:28.513599 coreos-metadata[1621]: Nov 12 20:55:28.512 INFO Fetch failed with 404: resource not found Nov 12 20:55:28.513599 coreos-metadata[1621]: Nov 12 20:55:28.512 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 12 20:55:28.516502 coreos-metadata[1621]: Nov 12 20:55:28.513 INFO Fetch failed with 404: resource not found Nov 12 20:55:28.516502 coreos-metadata[1621]: Nov 12 20:55:28.515 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 12 20:55:28.522016 coreos-metadata[1621]: Nov 12 20:55:28.516 INFO Fetch successful Nov 12 20:55:28.525932 unknown[1621]: wrote ssh authorized keys file for user: core Nov 12 20:55:28.596190 update-ssh-keys[1628]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:55:28.580796 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 20:55:28.604486 systemd[1]: Finished sshkeys.service. 
Nov 12 20:55:28.652980 polkitd[1624]: Started polkitd version 121 Nov 12 20:55:28.700509 polkitd[1624]: Loading rules from directory /etc/polkit-1/rules.d Nov 12 20:55:28.700759 polkitd[1624]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 12 20:55:28.704106 polkitd[1624]: Finished loading, compiling and executing 2 rules Nov 12 20:55:28.704798 dbus-daemon[1511]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 12 20:55:28.705045 systemd[1]: Started polkit.service - Authorization Manager. Nov 12 20:55:28.710051 polkitd[1624]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 12 20:55:28.797672 systemd-hostnamed[1612]: Hostname set to (transient) Nov 12 20:55:28.799337 systemd-resolved[1460]: System hostname changed to 'ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal'. Nov 12 20:55:28.802879 locksmithd[1617]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" 
Nov 12 20:55:29.162661 containerd[1577]: time="2024-11-12T20:55:29.162546458Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:55:29.328574 containerd[1577]: time="2024-11-12T20:55:29.328453343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:55:29.333249 containerd[1577]: time="2024-11-12T20:55:29.332077101Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:55:29.333776 containerd[1577]: time="2024-11-12T20:55:29.333399530Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:55:29.333776 containerd[1577]: time="2024-11-12T20:55:29.333441557Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:55:29.334136 containerd[1577]: time="2024-11-12T20:55:29.333957861Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:55:29.334136 containerd[1577]: time="2024-11-12T20:55:29.334000298Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:55:29.335972 containerd[1577]: time="2024-11-12T20:55:29.334317724Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:55:29.335972 containerd[1577]: time="2024-11-12T20:55:29.334350735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:55:29.335972 containerd[1577]: time="2024-11-12T20:55:29.335879403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:55:29.335972 containerd[1577]: time="2024-11-12T20:55:29.335914450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 12 20:55:29.336349 containerd[1577]: time="2024-11-12T20:55:29.336276612Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:55:29.336825 containerd[1577]: time="2024-11-12T20:55:29.336506768Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:55:29.336825 containerd[1577]: time="2024-11-12T20:55:29.336659681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:55:29.337391 containerd[1577]: time="2024-11-12T20:55:29.337168401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:55:29.338114 containerd[1577]: time="2024-11-12T20:55:29.338080176Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:55:29.338226 containerd[1577]: time="2024-11-12T20:55:29.338206988Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:55:29.338501 containerd[1577]: time="2024-11-12T20:55:29.338473877Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:55:29.338807 containerd[1577]: time="2024-11-12T20:55:29.338647824Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:55:29.349844 containerd[1577]: time="2024-11-12T20:55:29.348129330Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:55:29.349844 containerd[1577]: time="2024-11-12T20:55:29.348229960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Nov 12 20:55:29.349844 containerd[1577]: time="2024-11-12T20:55:29.348326100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:55:29.349844 containerd[1577]: time="2024-11-12T20:55:29.348383381Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:55:29.349844 containerd[1577]: time="2024-11-12T20:55:29.348422305Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:55:29.351754 containerd[1577]: time="2024-11-12T20:55:29.350962934Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:55:29.354557 containerd[1577]: time="2024-11-12T20:55:29.354178493Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:55:29.355963 containerd[1577]: time="2024-11-12T20:55:29.355623661Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.359942211Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360021878Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360095829Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360127412Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360153050Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360200029Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360267124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360293907Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360336969Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360386825Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360422855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360464543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360488080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.361455 containerd[1577]: time="2024-11-12T20:55:29.360514482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360540086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360563974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360587539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360612456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360637827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360663476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360685286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360710074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360732956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360775725Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360835249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360859531Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360881198Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:55:29.362157 containerd[1577]: time="2024-11-12T20:55:29.360971658Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:55:29.362804 containerd[1577]: time="2024-11-12T20:55:29.361002780Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:55:29.362804 containerd[1577]: time="2024-11-12T20:55:29.361024587Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:55:29.362804 containerd[1577]: time="2024-11-12T20:55:29.361082995Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:55:29.362804 containerd[1577]: time="2024-11-12T20:55:29.361105247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:55:29.362804 containerd[1577]: time="2024-11-12T20:55:29.361129143Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:55:29.362804 containerd[1577]: time="2024-11-12T20:55:29.361148734Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:55:29.362804 containerd[1577]: time="2024-11-12T20:55:29.361170433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 12 20:55:29.369199 containerd[1577]: time="2024-11-12T20:55:29.365226162Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:55:29.369199 containerd[1577]: time="2024-11-12T20:55:29.365863909Z" level=info msg="Connect containerd service" Nov 12 20:55:29.369199 containerd[1577]: time="2024-11-12T20:55:29.365955307Z" level=info msg="using legacy CRI server" Nov 12 20:55:29.369199 containerd[1577]: time="2024-11-12T20:55:29.365972364Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:55:29.369199 containerd[1577]: time="2024-11-12T20:55:29.366205317Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.381659741Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382276248Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382384318Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382447104Z" level=info msg="Start subscribing containerd event" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382508497Z" level=info msg="Start recovering state" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382615894Z" level=info msg="Start event monitor" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382648187Z" level=info msg="Start snapshots syncer" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382666270Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382687379Z" level=info msg="Start streaming server" Nov 12 20:55:29.384785 containerd[1577]: time="2024-11-12T20:55:29.382783166Z" level=info msg="containerd successfully booted in 0.223325s" Nov 12 20:55:29.382970 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:55:29.507896 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:55:29.565583 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:55:29.583412 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:55:29.613743 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:55:29.614136 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:55:29.636138 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:55:29.674669 instance-setup[1547]: INFO Running google_set_multiqueue. Nov 12 20:55:29.684775 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:55:29.707874 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:55:29.726781 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:55:29.731874 instance-setup[1547]: INFO Set channels for eth0 to 2. 
Nov 12 20:55:29.737168 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:55:29.744643 instance-setup[1547]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Nov 12 20:55:29.754487 instance-setup[1547]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Nov 12 20:55:29.754556 instance-setup[1547]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Nov 12 20:55:29.759542 instance-setup[1547]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Nov 12 20:55:29.759622 instance-setup[1547]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Nov 12 20:55:29.765456 instance-setup[1547]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Nov 12 20:55:29.765518 instance-setup[1547]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Nov 12 20:55:29.765700 instance-setup[1547]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Nov 12 20:55:29.779452 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 12 20:55:29.785135 instance-setup[1547]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 12 20:55:29.787629 instance-setup[1547]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Nov 12 20:55:29.787814 instance-setup[1547]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Nov 12 20:55:29.812818 init.sh[1535]: + /usr/bin/google_metadata_script_runner --script-type startup Nov 12 20:55:29.974212 tar[1574]: linux-amd64/LICENSE Nov 12 20:55:29.974212 tar[1574]: linux-amd64/README.md Nov 12 20:55:29.999356 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:55:30.025340 startup-script[1705]: INFO Starting startup scripts. Nov 12 20:55:30.032723 startup-script[1705]: INFO No startup scripts found in metadata. Nov 12 20:55:30.032781 startup-script[1705]: INFO Finished running startup scripts. 
Nov 12 20:55:30.054804 init.sh[1535]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Nov 12 20:55:30.054804 init.sh[1535]: + daemon_pids=() Nov 12 20:55:30.054987 init.sh[1535]: + for d in accounts clock_skew network Nov 12 20:55:30.056180 init.sh[1535]: + daemon_pids+=($!) Nov 12 20:55:30.056180 init.sh[1535]: + for d in accounts clock_skew network Nov 12 20:55:30.056339 init.sh[1713]: + /usr/bin/google_accounts_daemon Nov 12 20:55:30.056864 init.sh[1535]: + daemon_pids+=($!) Nov 12 20:55:30.056864 init.sh[1535]: + for d in accounts clock_skew network Nov 12 20:55:30.056955 init.sh[1714]: + /usr/bin/google_clock_skew_daemon Nov 12 20:55:30.059925 init.sh[1535]: + daemon_pids+=($!) Nov 12 20:55:30.059925 init.sh[1535]: + NOTIFY_SOCKET=/run/systemd/notify Nov 12 20:55:30.059925 init.sh[1535]: + /usr/bin/systemd-notify --ready Nov 12 20:55:30.060130 init.sh[1715]: + /usr/bin/google_network_daemon Nov 12 20:55:30.080678 systemd[1]: Started oem-gce.service - GCE Linux Agent. Nov 12 20:55:30.095556 init.sh[1535]: + wait -n 1713 1714 1715 Nov 12 20:55:30.429719 google-clock-skew[1714]: INFO Starting Google Clock Skew daemon. Nov 12 20:55:30.442084 google-clock-skew[1714]: INFO Clock drift token has changed: 0. Nov 12 20:55:30.456547 google-networking[1715]: INFO Starting Google Networking daemon. Nov 12 20:55:30.517761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:30.525143 groupadd[1725]: group added to /etc/group: name=google-sudoers, GID=1000 Nov 12 20:55:30.528275 groupadd[1725]: group added to /etc/gshadow: name=google-sudoers Nov 12 20:55:30.531547 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:55:30.535138 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:30.541975 systemd[1]: Startup finished in 11.624s (kernel) + 9.239s (userspace) = 20.864s. 
Nov 12 20:55:30.592346 groupadd[1725]: new group: name=google-sudoers, GID=1000 Nov 12 20:55:30.621737 google-accounts[1713]: INFO Starting Google Accounts daemon. Nov 12 20:55:30.634541 google-accounts[1713]: WARNING OS Login not installed. Nov 12 20:55:30.636028 google-accounts[1713]: INFO Creating a new user account for 0. Nov 12 20:55:30.640868 init.sh[1746]: useradd: invalid user name '0': use --badname to ignore Nov 12 20:55:30.641356 google-accounts[1713]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Nov 12 20:55:31.000072 systemd-resolved[1460]: Clock change detected. Flushing caches. Nov 12 20:55:31.001629 google-clock-skew[1714]: INFO Synced system time with hardware clock. Nov 12 20:55:31.094182 kubelet[1733]: E1112 20:55:31.094051 1733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:31.096228 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:31.096543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:35.580455 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:55:35.592759 systemd[1]: Started sshd@0-10.128.0.109:22-139.178.89.65:54864.service - OpenSSH per-connection server daemon (139.178.89.65:54864). Nov 12 20:55:35.887575 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 54864 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:55:35.891284 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:35.902284 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Nov 12 20:55:35.914760 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:55:35.919570 systemd-logind[1559]: New session 1 of user core. Nov 12 20:55:35.934036 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:55:35.942839 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:55:35.964281 (systemd)[1762]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:55:36.077827 systemd[1762]: Queued start job for default target default.target. Nov 12 20:55:36.078461 systemd[1762]: Created slice app.slice - User Application Slice. Nov 12 20:55:36.078498 systemd[1762]: Reached target paths.target - Paths. Nov 12 20:55:36.078521 systemd[1762]: Reached target timers.target - Timers. Nov 12 20:55:36.084225 systemd[1762]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:55:36.094635 systemd[1762]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:55:36.094716 systemd[1762]: Reached target sockets.target - Sockets. Nov 12 20:55:36.094739 systemd[1762]: Reached target basic.target - Basic System. Nov 12 20:55:36.094796 systemd[1762]: Reached target default.target - Main User Target. Nov 12 20:55:36.094844 systemd[1762]: Startup finished in 123ms. Nov 12 20:55:36.095328 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:55:36.110885 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:55:36.336474 systemd[1]: Started sshd@1-10.128.0.109:22-139.178.89.65:54872.service - OpenSSH per-connection server daemon (139.178.89.65:54872). 
Nov 12 20:55:36.621027 sshd[1774]: Accepted publickey for core from 139.178.89.65 port 54872 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:55:36.622832 sshd[1774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:36.629233 systemd-logind[1559]: New session 2 of user core. Nov 12 20:55:36.636401 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:55:36.839571 sshd[1774]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:36.843765 systemd[1]: sshd@1-10.128.0.109:22-139.178.89.65:54872.service: Deactivated successfully. Nov 12 20:55:36.848305 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:55:36.849737 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 20:55:36.852670 systemd-logind[1559]: Removed session 2. Nov 12 20:55:36.886447 systemd[1]: Started sshd@2-10.128.0.109:22-139.178.89.65:54876.service - OpenSSH per-connection server daemon (139.178.89.65:54876). Nov 12 20:55:37.170602 sshd[1782]: Accepted publickey for core from 139.178.89.65 port 54876 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:55:37.172461 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:37.178649 systemd-logind[1559]: New session 3 of user core. Nov 12 20:55:37.189514 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:55:37.379465 sshd[1782]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:37.384871 systemd[1]: sshd@2-10.128.0.109:22-139.178.89.65:54876.service: Deactivated successfully. Nov 12 20:55:37.389528 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:55:37.389829 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:55:37.391736 systemd-logind[1559]: Removed session 3. 
Nov 12 20:55:37.434628 systemd[1]: Started sshd@3-10.128.0.109:22-139.178.89.65:54946.service - OpenSSH per-connection server daemon (139.178.89.65:54946). Nov 12 20:55:37.717842 sshd[1790]: Accepted publickey for core from 139.178.89.65 port 54946 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:55:37.719421 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:37.724679 systemd-logind[1559]: New session 4 of user core. Nov 12 20:55:37.734398 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:55:37.933892 sshd[1790]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:37.939775 systemd[1]: sshd@3-10.128.0.109:22-139.178.89.65:54946.service: Deactivated successfully. Nov 12 20:55:37.943599 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:55:37.944102 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:55:37.945884 systemd-logind[1559]: Removed session 4. Nov 12 20:55:37.981416 systemd[1]: Started sshd@4-10.128.0.109:22-139.178.89.65:54958.service - OpenSSH per-connection server daemon (139.178.89.65:54958). Nov 12 20:55:38.268149 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 54958 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:55:38.269769 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:38.276183 systemd-logind[1559]: New session 5 of user core. Nov 12 20:55:38.283545 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 12 20:55:38.460849 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:55:38.461345 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:38.481694 sudo[1802]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:38.525069 sshd[1798]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:38.531400 systemd[1]: sshd@4-10.128.0.109:22-139.178.89.65:54958.service: Deactivated successfully. Nov 12 20:55:38.535567 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:55:38.536007 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:55:38.537757 systemd-logind[1559]: Removed session 5. Nov 12 20:55:38.572785 systemd[1]: Started sshd@5-10.128.0.109:22-139.178.89.65:54974.service - OpenSSH per-connection server daemon (139.178.89.65:54974). Nov 12 20:55:38.857274 sshd[1807]: Accepted publickey for core from 139.178.89.65 port 54974 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:55:38.858786 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:38.865193 systemd-logind[1559]: New session 6 of user core. Nov 12 20:55:38.876410 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 12 20:55:39.035281 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:55:39.035772 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:39.040511 sudo[1812]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:39.053218 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:55:39.053676 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:39.070450 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:39.073180 auditctl[1815]: No rules Nov 12 20:55:39.073957 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:55:39.074362 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:39.084655 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:39.114654 augenrules[1834]: No rules Nov 12 20:55:39.116433 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:39.119016 sudo[1811]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:39.163709 sshd[1807]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:39.167793 systemd[1]: sshd@5-10.128.0.109:22-139.178.89.65:54974.service: Deactivated successfully. Nov 12 20:55:39.172897 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:55:39.173444 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:55:39.175225 systemd-logind[1559]: Removed session 6. Nov 12 20:55:39.212430 systemd[1]: Started sshd@6-10.128.0.109:22-139.178.89.65:54990.service - OpenSSH per-connection server daemon (139.178.89.65:54990). 
Nov 12 20:55:39.502273 sshd[1843]: Accepted publickey for core from 139.178.89.65 port 54990 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:55:39.504170 sshd[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:39.510432 systemd-logind[1559]: New session 7 of user core. Nov 12 20:55:39.517396 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:55:39.682518 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:55:39.682988 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:40.108467 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:55:40.111701 (dockerd)[1863]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:55:40.531412 dockerd[1863]: time="2024-11-12T20:55:40.531004638Z" level=info msg="Starting up" Nov 12 20:55:41.125636 systemd[1]: var-lib-docker-metacopy\x2dcheck1183781126-merged.mount: Deactivated successfully. Nov 12 20:55:41.127997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:55:41.135340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:41.164128 dockerd[1863]: time="2024-11-12T20:55:41.164067018Z" level=info msg="Loading containers: start." Nov 12 20:55:41.424378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:41.430838 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:41.478117 kernel: Initializing XFRM netlink socket Nov 12 20:55:41.551146 kubelet[1928]: E1112 20:55:41.551044 1928 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:41.558264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:41.560223 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:41.603245 systemd-networkd[1222]: docker0: Link UP Nov 12 20:55:41.619127 dockerd[1863]: time="2024-11-12T20:55:41.619068962Z" level=info msg="Loading containers: done." Nov 12 20:55:41.638465 dockerd[1863]: time="2024-11-12T20:55:41.638408010Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:55:41.638671 dockerd[1863]: time="2024-11-12T20:55:41.638538168Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:55:41.638734 dockerd[1863]: time="2024-11-12T20:55:41.638675535Z" level=info msg="Daemon has completed initialization" Nov 12 20:55:41.674845 dockerd[1863]: time="2024-11-12T20:55:41.674702072Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:55:41.675798 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 12 20:55:42.622511 containerd[1577]: time="2024-11-12T20:55:42.622407543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:55:43.116016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3297680956.mount: Deactivated successfully. Nov 12 20:55:45.323310 containerd[1577]: time="2024-11-12T20:55:45.323233750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:45.324961 containerd[1577]: time="2024-11-12T20:55:45.324881608Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35147427" Nov 12 20:55:45.326463 containerd[1577]: time="2024-11-12T20:55:45.326388269Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:45.330256 containerd[1577]: time="2024-11-12T20:55:45.330179006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:45.332310 containerd[1577]: time="2024-11-12T20:55:45.331752505Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.709290399s" Nov 12 20:55:45.332310 containerd[1577]: time="2024-11-12T20:55:45.331814234Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:55:45.361786 containerd[1577]: 
time="2024-11-12T20:55:45.361741379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:55:47.377724 containerd[1577]: time="2024-11-12T20:55:47.377654224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:47.379317 containerd[1577]: time="2024-11-12T20:55:47.379248060Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32220233" Nov 12 20:55:47.380513 containerd[1577]: time="2024-11-12T20:55:47.380452674Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:47.384050 containerd[1577]: time="2024-11-12T20:55:47.383981064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:47.385618 containerd[1577]: time="2024-11-12T20:55:47.385420389Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 2.02362857s" Nov 12 20:55:47.385618 containerd[1577]: time="2024-11-12T20:55:47.385467666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:55:47.415032 containerd[1577]: time="2024-11-12T20:55:47.414976713Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 
20:55:48.572338 containerd[1577]: time="2024-11-12T20:55:48.572268121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:48.573918 containerd[1577]: time="2024-11-12T20:55:48.573840209Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17334576" Nov 12 20:55:48.574957 containerd[1577]: time="2024-11-12T20:55:48.574897619Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:48.581113 containerd[1577]: time="2024-11-12T20:55:48.578836806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:48.583396 containerd[1577]: time="2024-11-12T20:55:48.583357169Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.16833327s" Nov 12 20:55:48.583562 containerd[1577]: time="2024-11-12T20:55:48.583539392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:55:48.612413 containerd[1577]: time="2024-11-12T20:55:48.612369133Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:55:49.877889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount551987761.mount: Deactivated successfully. 
Nov 12 20:55:50.409845 containerd[1577]: time="2024-11-12T20:55:50.409783357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:50.411054 containerd[1577]: time="2024-11-12T20:55:50.410987119Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28618711" Nov 12 20:55:50.412630 containerd[1577]: time="2024-11-12T20:55:50.412561945Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:50.415548 containerd[1577]: time="2024-11-12T20:55:50.415484052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:50.417105 containerd[1577]: time="2024-11-12T20:55:50.416389480Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 1.803964889s" Nov 12 20:55:50.417105 containerd[1577]: time="2024-11-12T20:55:50.416435857Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:55:50.446242 containerd[1577]: time="2024-11-12T20:55:50.446198016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:55:50.951595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774055244.mount: Deactivated successfully. 
Nov 12 20:55:51.680662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:55:51.690563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:51.959372 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:51.976141 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:52.076451 kubelet[2172]: E1112 20:55:52.076291 2172 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:52.081124 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:52.081585 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 12 20:55:52.258635 containerd[1577]: time="2024-11-12T20:55:52.258461837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:52.260729 containerd[1577]: time="2024-11-12T20:55:52.260655493Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Nov 12 20:55:52.261697 containerd[1577]: time="2024-11-12T20:55:52.261615419Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:52.265650 containerd[1577]: time="2024-11-12T20:55:52.265587882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:52.272117 containerd[1577]: time="2024-11-12T20:55:52.271217534Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.824953147s" Nov 12 20:55:52.272117 containerd[1577]: time="2024-11-12T20:55:52.271281476Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:55:52.302341 containerd[1577]: time="2024-11-12T20:55:52.302280668Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:55:52.715745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584244653.mount: Deactivated successfully. 
Nov 12 20:55:52.721868 containerd[1577]: time="2024-11-12T20:55:52.721800127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:52.723117 containerd[1577]: time="2024-11-12T20:55:52.723039590Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Nov 12 20:55:52.724165 containerd[1577]: time="2024-11-12T20:55:52.724067610Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:52.727148 containerd[1577]: time="2024-11-12T20:55:52.727049718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:52.728577 containerd[1577]: time="2024-11-12T20:55:52.728135690Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 425.79666ms" Nov 12 20:55:52.728577 containerd[1577]: time="2024-11-12T20:55:52.728179929Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:55:52.757234 containerd[1577]: time="2024-11-12T20:55:52.757190783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:55:53.190016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207010597.mount: Deactivated successfully. 
Nov 12 20:55:55.483934 containerd[1577]: time="2024-11-12T20:55:55.483857675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:55.485750 containerd[1577]: time="2024-11-12T20:55:55.485671298Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115" Nov 12 20:55:55.487008 containerd[1577]: time="2024-11-12T20:55:55.486930665Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:55.490526 containerd[1577]: time="2024-11-12T20:55:55.490485335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:55.493388 containerd[1577]: time="2024-11-12T20:55:55.492140331Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.73490562s" Nov 12 20:55:55.493388 containerd[1577]: time="2024-11-12T20:55:55.492187040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:55:58.530721 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 12 20:56:00.484358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:00.491443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:00.532454 systemd[1]: Reloading requested from client PID 2313 ('systemctl') (unit session-7.scope)... 
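Each pull sequence above ends with a containerd "Pulled image" message that carries the image reference and the wall-clock pull time. A small sketch that extracts both from such a message — the regular expression is an assumption fitted to the escaped-quote format shown in this journal, not a containerd-provided parser:

```python
import re

# Matches e.g.: Pulled image \"registry.k8s.io/etcd:3.5.10-0\" ... in 2.73490562s
PULL_RE = re.compile(r'Pulled image \\"(?P<image>[^"\\]+)\\".*?in (?P<dur>[\d.]+m?s)')

def parse_pull(msg):
    # Return (image reference, duration string) or None if the line doesn't match.
    m = PULL_RE.search(msg)
    return (m.group("image"), m.group("dur")) if m else None

sample = r'msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" ... in 2.73490562s"'
print(parse_pull(sample))  # ('registry.k8s.io/etcd:3.5.10-0', '2.73490562s')
```

Applied across the log, this recovers the pull timings containerd reported: roughly 2.7s for kube-apiserver, 2.0s for kube-controller-manager, 1.2s for kube-scheduler, 1.8s each for kube-proxy and coredns, 426ms for pause, and 2.7s for etcd.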
Nov 12 20:56:00.532477 systemd[1]: Reloading... Nov 12 20:56:00.640116 zram_generator::config[2349]: No configuration found. Nov 12 20:56:00.828431 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:56:00.922029 systemd[1]: Reloading finished in 388 ms. Nov 12 20:56:00.983154 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:56:00.983308 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:56:00.983775 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:00.991548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:01.214315 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:01.214747 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:56:01.276681 kubelet[2416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:56:01.276681 kubelet[2416]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:56:01.276681 kubelet[2416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:56:01.278423 kubelet[2416]: I1112 20:56:01.278355 2416 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:56:01.747631 kubelet[2416]: I1112 20:56:01.747376 2416 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:56:01.747631 kubelet[2416]: I1112 20:56:01.747427 2416 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:56:01.747858 kubelet[2416]: I1112 20:56:01.747760 2416 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:56:01.775331 kubelet[2416]: I1112 20:56:01.775172 2416 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:56:01.776045 kubelet[2416]: E1112 20:56:01.775946 2416 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.794310 kubelet[2416]: I1112 20:56:01.794267 2416 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:56:01.794872 kubelet[2416]: I1112 20:56:01.794839 2416 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:56:01.795124 kubelet[2416]: I1112 20:56:01.795082 2416 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:56:01.795342 kubelet[2416]: I1112 20:56:01.795139 2416 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:56:01.795342 kubelet[2416]: I1112 20:56:01.795157 2416 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:56:01.795342 kubelet[2416]: 
I1112 20:56:01.795298 2416 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:01.795488 kubelet[2416]: I1112 20:56:01.795451 2416 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:56:01.795548 kubelet[2416]: I1112 20:56:01.795493 2416 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:56:01.795548 kubelet[2416]: I1112 20:56:01.795533 2416 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:56:01.795633 kubelet[2416]: I1112 20:56:01.795551 2416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:56:01.797858 kubelet[2416]: W1112 20:56:01.797554 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.797858 kubelet[2416]: E1112 20:56:01.797619 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.797858 kubelet[2416]: W1112 20:56:01.797703 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.797858 kubelet[2416]: E1112 20:56:01.797750 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 
20:56:01.798514 kubelet[2416]: I1112 20:56:01.798490 2416 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:56:01.803136 kubelet[2416]: I1112 20:56:01.803114 2416 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:56:01.803303 kubelet[2416]: W1112 20:56:01.803267 2416 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:56:01.804007 kubelet[2416]: I1112 20:56:01.803982 2416 server.go:1256] "Started kubelet" Nov 12 20:56:01.805400 kubelet[2416]: I1112 20:56:01.805235 2416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:56:01.812617 kubelet[2416]: E1112 20:56:01.812419 2416 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal.18075404e876e686 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,UID:ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,},FirstTimestamp:2024-11-12 20:56:01.803953798 +0000 UTC m=+0.582202756,LastTimestamp:2024-11-12 20:56:01.803953798 +0000 UTC m=+0.582202756,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,}" Nov 12 20:56:01.815119 kubelet[2416]: I1112 20:56:01.813860 2416 server.go:162] "Starting to listen" 
address="0.0.0.0" port=10250 Nov 12 20:56:01.815119 kubelet[2416]: I1112 20:56:01.814629 2416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:56:01.815119 kubelet[2416]: I1112 20:56:01.814971 2416 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:56:01.815119 kubelet[2416]: I1112 20:56:01.815072 2416 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:56:01.816449 kubelet[2416]: I1112 20:56:01.816428 2416 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:56:01.820470 kubelet[2416]: E1112 20:56:01.820438 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.109:6443: connect: connection refused" interval="200ms" Nov 12 20:56:01.820613 kubelet[2416]: I1112 20:56:01.820572 2416 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:56:01.820613 kubelet[2416]: I1112 20:56:01.820505 2416 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:56:01.821158 kubelet[2416]: I1112 20:56:01.821133 2416 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:56:01.824274 kubelet[2416]: I1112 20:56:01.824194 2416 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:56:01.824274 kubelet[2416]: I1112 20:56:01.824238 2416 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:56:01.837185 kubelet[2416]: W1112 20:56:01.835729 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.128.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.837185 kubelet[2416]: E1112 20:56:01.835799 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.837185 kubelet[2416]: E1112 20:56:01.835900 2416 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:56:01.844286 kubelet[2416]: I1112 20:56:01.844239 2416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:56:01.848173 kubelet[2416]: I1112 20:56:01.848151 2416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:56:01.848269 kubelet[2416]: I1112 20:56:01.848215 2416 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:56:01.848269 kubelet[2416]: I1112 20:56:01.848239 2416 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:56:01.848382 kubelet[2416]: E1112 20:56:01.848323 2416 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:56:01.854472 kubelet[2416]: W1112 20:56:01.854429 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.854645 kubelet[2416]: E1112 20:56:01.854626 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.128.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:01.878266 kubelet[2416]: I1112 20:56:01.878230 2416 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:56:01.878396 kubelet[2416]: I1112 20:56:01.878310 2416 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:56:01.878396 kubelet[2416]: I1112 20:56:01.878334 2416 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:01.880801 kubelet[2416]: I1112 20:56:01.880779 2416 policy_none.go:49] "None policy: Start" Nov 12 20:56:01.881701 kubelet[2416]: I1112 20:56:01.881679 2416 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:56:01.881790 kubelet[2416]: I1112 20:56:01.881765 2416 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:56:01.888130 kubelet[2416]: I1112 20:56:01.888105 2416 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:56:01.888510 kubelet[2416]: I1112 20:56:01.888438 2416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:56:01.892214 kubelet[2416]: E1112 20:56:01.892172 2416 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" not found" Nov 12 20:56:01.924938 kubelet[2416]: I1112 20:56:01.924896 2416 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:01.925369 kubelet[2416]: E1112 20:56:01.925331 2416 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.109:6443/api/v1/nodes\": dial tcp 10.128.0.109:6443: connect: connection refused" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:01.948524 kubelet[2416]: I1112 20:56:01.948465 2416 
topology_manager.go:215] "Topology Admit Handler" podUID="6907202c4202f23bd42fb816b6175277" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:01.963159 kubelet[2416]: I1112 20:56:01.963012 2416 topology_manager.go:215] "Topology Admit Handler" podUID="ea5d752025284bc4499ca92e0b7832b2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:01.968443 kubelet[2416]: I1112 20:56:01.968417 2416 topology_manager.go:215] "Topology Admit Handler" podUID="b69f8ec08a79ef1c25f295f615584b48" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.021638 kubelet[2416]: E1112 20:56:02.021498 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.109:6443: connect: connection refused" interval="400ms" Nov 12 20:56:02.022745 kubelet[2416]: I1112 20:56:02.022707 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.023187 kubelet[2416]: I1112 20:56:02.023045 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: 
\"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.023187 kubelet[2416]: I1112 20:56:02.023151 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b69f8ec08a79ef1c25f295f615584b48-kubeconfig\") pod \"kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"b69f8ec08a79ef1c25f295f615584b48\") " pod="kube-system/kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.023663 kubelet[2416]: I1112 20:56:02.023467 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.023663 kubelet[2416]: I1112 20:56:02.023540 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6907202c4202f23bd42fb816b6175277-ca-certs\") pod \"kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"6907202c4202f23bd42fb816b6175277\") " pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.023663 kubelet[2416]: I1112 20:56:02.023638 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6907202c4202f23bd42fb816b6175277-k8s-certs\") pod \"kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: 
\"6907202c4202f23bd42fb816b6175277\") " pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.024190 kubelet[2416]: I1112 20:56:02.023932 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6907202c4202f23bd42fb816b6175277-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"6907202c4202f23bd42fb816b6175277\") " pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.024190 kubelet[2416]: I1112 20:56:02.024064 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-ca-certs\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.024190 kubelet[2416]: I1112 20:56:02.024149 2416 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.130302 kubelet[2416]: I1112 20:56:02.130252 2416 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.130759 kubelet[2416]: E1112 20:56:02.130714 2416 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.128.0.109:6443/api/v1/nodes\": dial tcp 10.128.0.109:6443: connect: connection refused" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.272585 containerd[1577]: time="2024-11-12T20:56:02.272435592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,Uid:6907202c4202f23bd42fb816b6175277,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:02.278507 containerd[1577]: time="2024-11-12T20:56:02.278439508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,Uid:ea5d752025284bc4499ca92e0b7832b2,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:02.286939 containerd[1577]: time="2024-11-12T20:56:02.286898462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,Uid:b69f8ec08a79ef1c25f295f615584b48,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:02.422642 kubelet[2416]: E1112 20:56:02.422606 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.109:6443: connect: connection refused" interval="800ms" Nov 12 20:56:02.537184 kubelet[2416]: I1112 20:56:02.536997 2416 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.537508 kubelet[2416]: E1112 20:56:02.537477 2416 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.109:6443/api/v1/nodes\": dial tcp 10.128.0.109:6443: connect: connection refused" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:02.676033 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount653155940.mount: Deactivated successfully. Nov 12 20:56:02.684341 containerd[1577]: time="2024-11-12T20:56:02.684255943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:02.685693 containerd[1577]: time="2024-11-12T20:56:02.685622521Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:02.687053 containerd[1577]: time="2024-11-12T20:56:02.686962404Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:56:02.687943 containerd[1577]: time="2024-11-12T20:56:02.687886584Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Nov 12 20:56:02.689002 containerd[1577]: time="2024-11-12T20:56:02.688924201Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:02.691270 containerd[1577]: time="2024-11-12T20:56:02.691224961Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:02.691877 containerd[1577]: time="2024-11-12T20:56:02.691803422Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:56:02.694561 containerd[1577]: time="2024-11-12T20:56:02.694446758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:02.697472 containerd[1577]: time="2024-11-12T20:56:02.697137521Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 424.606435ms" Nov 12 20:56:02.699615 containerd[1577]: time="2024-11-12T20:56:02.699561239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 421.017816ms" Nov 12 20:56:02.710826 containerd[1577]: time="2024-11-12T20:56:02.710778513Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 423.400674ms" Nov 12 20:56:02.767112 kubelet[2416]: W1112 20:56:02.767012 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:02.767244 kubelet[2416]: E1112 20:56:02.767121 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:02.891761 containerd[1577]: time="2024-11-12T20:56:02.891148933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:02.891761 containerd[1577]: time="2024-11-12T20:56:02.891606491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:02.893547 containerd[1577]: time="2024-11-12T20:56:02.892074942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:02.895694 containerd[1577]: time="2024-11-12T20:56:02.895474694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:02.898636 containerd[1577]: time="2024-11-12T20:56:02.898284369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:02.898636 containerd[1577]: time="2024-11-12T20:56:02.898364122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:02.898636 containerd[1577]: time="2024-11-12T20:56:02.898414577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:02.898636 containerd[1577]: time="2024-11-12T20:56:02.898570903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:02.904113 containerd[1577]: time="2024-11-12T20:56:02.902865039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:02.904113 containerd[1577]: time="2024-11-12T20:56:02.902933629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:02.904113 containerd[1577]: time="2024-11-12T20:56:02.902959702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:02.904113 containerd[1577]: time="2024-11-12T20:56:02.903081902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:03.025593 containerd[1577]: time="2024-11-12T20:56:03.025544573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,Uid:b69f8ec08a79ef1c25f295f615584b48,Namespace:kube-system,Attempt:0,} returns sandbox id \"02cc53bc79cdd70351a36c8778c3d589b7ceeb872901b590a3917935277edff7\"" Nov 12 20:56:03.029346 kubelet[2416]: E1112 20:56:03.029315 2416 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-21291" Nov 12 20:56:03.032579 containerd[1577]: time="2024-11-12T20:56:03.032534975Z" level=info msg="CreateContainer within sandbox \"02cc53bc79cdd70351a36c8778c3d589b7ceeb872901b590a3917935277edff7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:56:03.049868 containerd[1577]: time="2024-11-12T20:56:03.049831663Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,Uid:ea5d752025284bc4499ca92e0b7832b2,Namespace:kube-system,Attempt:0,} returns sandbox id \"890f7fa60501092f47c47d69ffe5a4013fde775b389ae187f0055b86b3239a34\"" Nov 12 20:56:03.050261 containerd[1577]: time="2024-11-12T20:56:03.050227729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal,Uid:6907202c4202f23bd42fb816b6175277,Namespace:kube-system,Attempt:0,} returns sandbox id \"20c293e2b1921e7956f76e0f7a0c8d100f92c40718592a53f852032986747f79\"" Nov 12 20:56:03.052235 kubelet[2416]: E1112 20:56:03.051999 2416 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-21291" Nov 12 20:56:03.052235 kubelet[2416]: E1112 20:56:03.052024 2416 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flat" Nov 12 20:56:03.054567 containerd[1577]: time="2024-11-12T20:56:03.054421439Z" level=info msg="CreateContainer within sandbox \"20c293e2b1921e7956f76e0f7a0c8d100f92c40718592a53f852032986747f79\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:56:03.059280 containerd[1577]: time="2024-11-12T20:56:03.059232216Z" level=info msg="CreateContainer within sandbox \"02cc53bc79cdd70351a36c8778c3d589b7ceeb872901b590a3917935277edff7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee6c3398be6d289ccb758697396feb3a5b2adb534661016252739cd438f30f7d\"" Nov 12 20:56:03.061642 kubelet[2416]: W1112 20:56:03.061586 2416 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:03.061773 kubelet[2416]: E1112 20:56:03.061655 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.109:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:03.071439 containerd[1577]: time="2024-11-12T20:56:03.071392868Z" level=info msg="CreateContainer within sandbox \"20c293e2b1921e7956f76e0f7a0c8d100f92c40718592a53f852032986747f79\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d02581b4e2c1ed9e072d040e3a714a719f6555b0c8eec61fe08e23673fdd66f7\"" Nov 12 20:56:03.071945 containerd[1577]: time="2024-11-12T20:56:03.071918965Z" level=info msg="CreateContainer within sandbox \"890f7fa60501092f47c47d69ffe5a4013fde775b389ae187f0055b86b3239a34\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:56:03.072397 containerd[1577]: time="2024-11-12T20:56:03.072305402Z" level=info msg="StartContainer for \"d02581b4e2c1ed9e072d040e3a714a719f6555b0c8eec61fe08e23673fdd66f7\"" Nov 12 20:56:03.075079 kubelet[2416]: W1112 20:56:03.074985 2416 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 20:56:03.075079 kubelet[2416]: E1112 20:56:03.075055 2416 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.109:6443: connect: connection refused Nov 12 
20:56:03.076860 containerd[1577]: time="2024-11-12T20:56:03.075434903Z" level=info msg="StartContainer for \"ee6c3398be6d289ccb758697396feb3a5b2adb534661016252739cd438f30f7d\"" Nov 12 20:56:03.108554 containerd[1577]: time="2024-11-12T20:56:03.108481081Z" level=info msg="CreateContainer within sandbox \"890f7fa60501092f47c47d69ffe5a4013fde775b389ae187f0055b86b3239a34\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"787911caae348fabf3fc63bb00a2731c78574616fb6421936b8ad35ee59c4608\"" Nov 12 20:56:03.117366 containerd[1577]: time="2024-11-12T20:56:03.116196454Z" level=info msg="StartContainer for \"787911caae348fabf3fc63bb00a2731c78574616fb6421936b8ad35ee59c4608\"" Nov 12 20:56:03.216771 containerd[1577]: time="2024-11-12T20:56:03.216723371Z" level=info msg="StartContainer for \"d02581b4e2c1ed9e072d040e3a714a719f6555b0c8eec61fe08e23673fdd66f7\" returns successfully" Nov 12 20:56:03.226116 kubelet[2416]: E1112 20:56:03.225844 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.109:6443: connect: connection refused" interval="1.6s" Nov 12 20:56:03.279112 containerd[1577]: time="2024-11-12T20:56:03.278560146Z" level=info msg="StartContainer for \"ee6c3398be6d289ccb758697396feb3a5b2adb534661016252739cd438f30f7d\" returns successfully" Nov 12 20:56:03.348642 kubelet[2416]: I1112 20:56:03.348610 2416 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:03.353135 containerd[1577]: time="2024-11-12T20:56:03.353069287Z" level=info msg="StartContainer for \"787911caae348fabf3fc63bb00a2731c78574616fb6421936b8ad35ee59c4608\" returns successfully" Nov 12 20:56:07.126509 kubelet[2416]: E1112 20:56:07.126441 2416 nodelease.go:49] "Failed to get node when 
trying to set owner ref to the node lease" err="nodes \"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" not found" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:07.301079 kubelet[2416]: I1112 20:56:07.301036 2416 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:07.803016 kubelet[2416]: I1112 20:56:07.801674 2416 apiserver.go:52] "Watching apiserver" Nov 12 20:56:07.821029 kubelet[2416]: I1112 20:56:07.821000 2416 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:56:08.686211 kubelet[2416]: W1112 20:56:08.685811 2416 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Nov 12 20:56:09.871319 systemd[1]: Reloading requested from client PID 2682 ('systemctl') (unit session-7.scope)... Nov 12 20:56:09.871341 systemd[1]: Reloading... Nov 12 20:56:09.975144 zram_generator::config[2718]: No configuration found. Nov 12 20:56:10.129161 kubelet[2416]: W1112 20:56:10.127754 2416 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Nov 12 20:56:10.149944 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:56:10.258831 systemd[1]: Reloading finished in 386 ms. Nov 12 20:56:10.305913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 12 20:56:10.306512 kubelet[2416]: I1112 20:56:10.306142 2416 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:56:10.313797 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:56:10.314249 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:10.323836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:10.589316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:10.602829 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:56:10.693228 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:56:10.693924 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:56:10.693924 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:56:10.693924 kubelet[2780]: I1112 20:56:10.693355 2780 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:56:10.701082 kubelet[2780]: I1112 20:56:10.700064 2780 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:56:10.701082 kubelet[2780]: I1112 20:56:10.700114 2780 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:56:10.701082 kubelet[2780]: I1112 20:56:10.700401 2780 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:56:10.702928 kubelet[2780]: I1112 20:56:10.702867 2780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:56:10.706380 kubelet[2780]: I1112 20:56:10.706355 2780 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:56:10.717922 kubelet[2780]: I1112 20:56:10.717881 2780 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:56:10.719546 kubelet[2780]: I1112 20:56:10.718813 2780 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:56:10.719546 kubelet[2780]: I1112 20:56:10.719071 2780 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:56:10.719546 kubelet[2780]: I1112 20:56:10.719140 2780 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:56:10.719546 kubelet[2780]: I1112 20:56:10.719160 2780 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:56:10.719546 kubelet[2780]: 
I1112 20:56:10.719211 2780 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:10.719546 kubelet[2780]: I1112 20:56:10.719371 2780 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:56:10.720013 kubelet[2780]: I1112 20:56:10.719392 2780 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:56:10.720013 kubelet[2780]: I1112 20:56:10.719426 2780 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:56:10.720013 kubelet[2780]: I1112 20:56:10.719450 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:56:10.726247 kubelet[2780]: I1112 20:56:10.725268 2780 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:56:10.727350 kubelet[2780]: I1112 20:56:10.727319 2780 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:56:10.741753 kubelet[2780]: I1112 20:56:10.741683 2780 server.go:1256] "Started kubelet" Nov 12 20:56:10.746202 kubelet[2780]: I1112 20:56:10.744642 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:56:10.748494 kubelet[2780]: I1112 20:56:10.746548 2780 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:56:10.748494 kubelet[2780]: I1112 20:56:10.746634 2780 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:56:10.756391 kubelet[2780]: I1112 20:56:10.754641 2780 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:56:10.757013 kubelet[2780]: I1112 20:56:10.756987 2780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:56:10.779109 kubelet[2780]: I1112 20:56:10.777839 2780 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:56:10.784254 kubelet[2780]: I1112 20:56:10.784219 2780 desired_state_of_world_populator.go:151] 
"Desired state populator starts to run" Nov 12 20:56:10.784458 kubelet[2780]: I1112 20:56:10.784440 2780 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:56:10.787843 kubelet[2780]: I1112 20:56:10.787820 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:56:10.789743 kubelet[2780]: I1112 20:56:10.789722 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:56:10.789874 kubelet[2780]: I1112 20:56:10.789860 2780 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:56:10.789989 kubelet[2780]: I1112 20:56:10.789975 2780 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:56:10.790247 kubelet[2780]: E1112 20:56:10.790229 2780 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:56:10.791285 kubelet[2780]: I1112 20:56:10.789903 2780 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:56:10.791494 kubelet[2780]: I1112 20:56:10.791469 2780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:56:10.798313 kubelet[2780]: E1112 20:56:10.798284 2780 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:56:10.805359 kubelet[2780]: I1112 20:56:10.805337 2780 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:56:10.886378 kubelet[2780]: I1112 20:56:10.886259 2780 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:10.892736 kubelet[2780]: E1112 20:56:10.892689 2780 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 12 20:56:10.901079 kubelet[2780]: I1112 20:56:10.901056 2780 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:10.901299 kubelet[2780]: I1112 20:56:10.901285 2780 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:10.908481 kubelet[2780]: I1112 20:56:10.908339 2780 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:56:10.908481 kubelet[2780]: I1112 20:56:10.908381 2780 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:56:10.908481 kubelet[2780]: I1112 20:56:10.908405 2780 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:10.908741 kubelet[2780]: I1112 20:56:10.908619 2780 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:56:10.908741 kubelet[2780]: I1112 20:56:10.908652 2780 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:56:10.908741 kubelet[2780]: I1112 20:56:10.908666 2780 policy_none.go:49] "None policy: Start" Nov 12 20:56:10.910693 kubelet[2780]: I1112 20:56:10.910452 2780 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:56:10.910693 kubelet[2780]: I1112 20:56:10.910489 2780 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:56:10.912370 kubelet[2780]: I1112 
20:56:10.911048 2780 state_mem.go:75] "Updated machine memory state" Nov 12 20:56:10.914685 kubelet[2780]: I1112 20:56:10.914060 2780 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:56:10.914685 kubelet[2780]: I1112 20:56:10.914351 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:56:11.093685 kubelet[2780]: I1112 20:56:11.093395 2780 topology_manager.go:215] "Topology Admit Handler" podUID="6907202c4202f23bd42fb816b6175277" podNamespace="kube-system" podName="kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.093685 kubelet[2780]: I1112 20:56:11.093515 2780 topology_manager.go:215] "Topology Admit Handler" podUID="ea5d752025284bc4499ca92e0b7832b2" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.093685 kubelet[2780]: I1112 20:56:11.093569 2780 topology_manager.go:215] "Topology Admit Handler" podUID="b69f8ec08a79ef1c25f295f615584b48" podNamespace="kube-system" podName="kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.105924 kubelet[2780]: W1112 20:56:11.105347 2780 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Nov 12 20:56:11.105924 kubelet[2780]: W1112 20:56:11.105414 2780 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Nov 12 20:56:11.105924 kubelet[2780]: E1112 20:56:11.105486 2780 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" already exists" 
pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.106536 kubelet[2780]: W1112 20:56:11.106508 2780 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Nov 12 20:56:11.106666 kubelet[2780]: E1112 20:56:11.106587 2780 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.187620 kubelet[2780]: I1112 20:56:11.187486 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6907202c4202f23bd42fb816b6175277-ca-certs\") pod \"kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"6907202c4202f23bd42fb816b6175277\") " pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188195 kubelet[2780]: I1112 20:56:11.187836 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-ca-certs\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188195 kubelet[2780]: I1112 20:56:11.187934 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-k8s-certs\") pod 
\"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188195 kubelet[2780]: I1112 20:56:11.188012 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b69f8ec08a79ef1c25f295f615584b48-kubeconfig\") pod \"kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"b69f8ec08a79ef1c25f295f615584b48\") " pod="kube-system/kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188195 kubelet[2780]: I1112 20:56:11.188052 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6907202c4202f23bd42fb816b6175277-k8s-certs\") pod \"kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"6907202c4202f23bd42fb816b6175277\") " pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188442 kubelet[2780]: I1112 20:56:11.188108 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6907202c4202f23bd42fb816b6175277-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"6907202c4202f23bd42fb816b6175277\") " pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188442 kubelet[2780]: I1112 20:56:11.188146 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-flexvolume-dir\") pod 
\"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188442 kubelet[2780]: I1112 20:56:11.188190 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.188442 kubelet[2780]: I1112 20:56:11.188239 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea5d752025284bc4499ca92e0b7832b2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" (UID: \"ea5d752025284bc4499ca92e0b7832b2\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:11.723214 kubelet[2780]: I1112 20:56:11.722911 2780 apiserver.go:52] "Watching apiserver" Nov 12 20:56:11.785484 kubelet[2780]: I1112 20:56:11.785399 2780 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:56:11.869111 kubelet[2780]: W1112 20:56:11.869064 2780 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Nov 12 20:56:11.880363 kubelet[2780]: E1112 20:56:11.878175 2780 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal\" already 
exists" pod="kube-system/kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:12.017574 kubelet[2780]: I1112 20:56:12.015813 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" podStartSLOduration=4.015752606 podStartE2EDuration="4.015752606s" podCreationTimestamp="2024-11-12 20:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:11.980972765 +0000 UTC m=+1.371653123" watchObservedRunningTime="2024-11-12 20:56:12.015752606 +0000 UTC m=+1.406432967" Nov 12 20:56:12.048897 kubelet[2780]: I1112 20:56:12.048858 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" podStartSLOduration=2.048806198 podStartE2EDuration="2.048806198s" podCreationTimestamp="2024-11-12 20:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:12.019300434 +0000 UTC m=+1.409980790" watchObservedRunningTime="2024-11-12 20:56:12.048806198 +0000 UTC m=+1.439486543" Nov 12 20:56:12.070934 kubelet[2780]: I1112 20:56:12.070891 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" podStartSLOduration=1.070824614 podStartE2EDuration="1.070824614s" podCreationTimestamp="2024-11-12 20:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:12.05098707 +0000 UTC m=+1.441667417" watchObservedRunningTime="2024-11-12 20:56:12.070824614 +0000 UTC m=+1.461504972" Nov 12 20:56:13.390120 update_engine[1565]: I20241112 
20:56:13.389132 1565 update_attempter.cc:509] Updating boot flags... Nov 12 20:56:13.563815 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2832) Nov 12 20:56:13.848124 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2831) Nov 12 20:56:16.492490 sudo[1847]: pam_unix(sudo:session): session closed for user root Nov 12 20:56:16.536146 sshd[1843]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:16.543892 systemd[1]: sshd@6-10.128.0.109:22-139.178.89.65:54990.service: Deactivated successfully. Nov 12 20:56:16.548203 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:56:16.549249 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:56:16.550817 systemd-logind[1559]: Removed session 7. Nov 12 20:56:23.564267 kubelet[2780]: I1112 20:56:23.564227 2780 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:56:23.564961 containerd[1577]: time="2024-11-12T20:56:23.564771795Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 12 20:56:23.565475 kubelet[2780]: I1112 20:56:23.565131 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:56:24.461596 kubelet[2780]: I1112 20:56:24.461537 2780 topology_manager.go:215] "Topology Admit Handler" podUID="65c02bae-1250-4671-ae2d-88ed2bc0af81" podNamespace="kube-system" podName="kube-proxy-mvgzh" Nov 12 20:56:24.588420 kubelet[2780]: I1112 20:56:24.588378 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65c02bae-1250-4671-ae2d-88ed2bc0af81-lib-modules\") pod \"kube-proxy-mvgzh\" (UID: \"65c02bae-1250-4671-ae2d-88ed2bc0af81\") " pod="kube-system/kube-proxy-mvgzh" Nov 12 20:56:24.589152 kubelet[2780]: I1112 20:56:24.588443 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/65c02bae-1250-4671-ae2d-88ed2bc0af81-kube-proxy\") pod \"kube-proxy-mvgzh\" (UID: \"65c02bae-1250-4671-ae2d-88ed2bc0af81\") " pod="kube-system/kube-proxy-mvgzh" Nov 12 20:56:24.589152 kubelet[2780]: I1112 20:56:24.588479 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65c02bae-1250-4671-ae2d-88ed2bc0af81-xtables-lock\") pod \"kube-proxy-mvgzh\" (UID: \"65c02bae-1250-4671-ae2d-88ed2bc0af81\") " pod="kube-system/kube-proxy-mvgzh" Nov 12 20:56:24.589152 kubelet[2780]: I1112 20:56:24.588516 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjnf\" (UniqueName: \"kubernetes.io/projected/65c02bae-1250-4671-ae2d-88ed2bc0af81-kube-api-access-vsjnf\") pod \"kube-proxy-mvgzh\" (UID: \"65c02bae-1250-4671-ae2d-88ed2bc0af81\") " pod="kube-system/kube-proxy-mvgzh" Nov 12 20:56:24.698377 kubelet[2780]: I1112 20:56:24.698312 2780 topology_manager.go:215] "Topology 
Admit Handler" podUID="032115da-eef2-4f7f-a30c-d103d901442b" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-5tw7z" Nov 12 20:56:24.774767 containerd[1577]: time="2024-11-12T20:56:24.774584456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mvgzh,Uid:65c02bae-1250-4671-ae2d-88ed2bc0af81,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:24.814385 containerd[1577]: time="2024-11-12T20:56:24.814176206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:24.815259 containerd[1577]: time="2024-11-12T20:56:24.814995705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:24.815259 containerd[1577]: time="2024-11-12T20:56:24.815037941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:24.815541 containerd[1577]: time="2024-11-12T20:56:24.815254807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:24.866128 containerd[1577]: time="2024-11-12T20:56:24.865872892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mvgzh,Uid:65c02bae-1250-4671-ae2d-88ed2bc0af81,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9e7d2ced477732920759512a9a8c712c411d95b2f8c75696d9d64f6824ed74e\"" Nov 12 20:56:24.869417 containerd[1577]: time="2024-11-12T20:56:24.869378881Z" level=info msg="CreateContainer within sandbox \"b9e7d2ced477732920759512a9a8c712c411d95b2f8c75696d9d64f6824ed74e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:56:24.891817 containerd[1577]: time="2024-11-12T20:56:24.891736866Z" level=info msg="CreateContainer within sandbox \"b9e7d2ced477732920759512a9a8c712c411d95b2f8c75696d9d64f6824ed74e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9dd3322bd26ed4363727c145f79ee92776304b93f2a0d878e1ee2ce0a57d12b\"" Nov 12 20:56:24.893821 kubelet[2780]: I1112 20:56:24.892325 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fn97\" (UniqueName: \"kubernetes.io/projected/032115da-eef2-4f7f-a30c-d103d901442b-kube-api-access-2fn97\") pod \"tigera-operator-56b74f76df-5tw7z\" (UID: \"032115da-eef2-4f7f-a30c-d103d901442b\") " pod="tigera-operator/tigera-operator-56b74f76df-5tw7z" Nov 12 20:56:24.893821 kubelet[2780]: I1112 20:56:24.892396 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/032115da-eef2-4f7f-a30c-d103d901442b-var-lib-calico\") pod \"tigera-operator-56b74f76df-5tw7z\" (UID: \"032115da-eef2-4f7f-a30c-d103d901442b\") " pod="tigera-operator/tigera-operator-56b74f76df-5tw7z" Nov 12 20:56:24.894004 containerd[1577]: time="2024-11-12T20:56:24.892603942Z" level=info msg="StartContainer for 
\"e9dd3322bd26ed4363727c145f79ee92776304b93f2a0d878e1ee2ce0a57d12b\"" Nov 12 20:56:24.963428 containerd[1577]: time="2024-11-12T20:56:24.963380481Z" level=info msg="StartContainer for \"e9dd3322bd26ed4363727c145f79ee92776304b93f2a0d878e1ee2ce0a57d12b\" returns successfully" Nov 12 20:56:25.304748 containerd[1577]: time="2024-11-12T20:56:25.304324166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-5tw7z,Uid:032115da-eef2-4f7f-a30c-d103d901442b,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:56:25.338937 containerd[1577]: time="2024-11-12T20:56:25.338465487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:25.338937 containerd[1577]: time="2024-11-12T20:56:25.338563558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:25.338937 containerd[1577]: time="2024-11-12T20:56:25.338593573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:25.338937 containerd[1577]: time="2024-11-12T20:56:25.338784802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:25.417016 containerd[1577]: time="2024-11-12T20:56:25.416961199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-5tw7z,Uid:032115da-eef2-4f7f-a30c-d103d901442b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"67cd0404e3c5ec751321f5ad836822bc7f20a1482204d5a03296c0e060ec5250\"" Nov 12 20:56:25.419370 containerd[1577]: time="2024-11-12T20:56:25.419334917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:56:25.722813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472470348.mount: Deactivated successfully. 
Nov 12 20:56:25.898898 kubelet[2780]: I1112 20:56:25.898113 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-mvgzh" podStartSLOduration=1.898045685 podStartE2EDuration="1.898045685s" podCreationTimestamp="2024-11-12 20:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:25.897851061 +0000 UTC m=+15.288531418" watchObservedRunningTime="2024-11-12 20:56:25.898045685 +0000 UTC m=+15.288726042" Nov 12 20:56:27.796210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236229840.mount: Deactivated successfully. Nov 12 20:56:28.903371 containerd[1577]: time="2024-11-12T20:56:28.903303095Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:28.904562 containerd[1577]: time="2024-11-12T20:56:28.904379936Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763323" Nov 12 20:56:28.905782 containerd[1577]: time="2024-11-12T20:56:28.905711686Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:28.910602 containerd[1577]: time="2024-11-12T20:56:28.910533040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:28.911889 containerd[1577]: time="2024-11-12T20:56:28.911726221Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest 
\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 3.492126678s" Nov 12 20:56:28.911889 containerd[1577]: time="2024-11-12T20:56:28.911772825Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:56:28.914793 containerd[1577]: time="2024-11-12T20:56:28.914602826Z" level=info msg="CreateContainer within sandbox \"67cd0404e3c5ec751321f5ad836822bc7f20a1482204d5a03296c0e060ec5250\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:56:28.932328 containerd[1577]: time="2024-11-12T20:56:28.932285072Z" level=info msg="CreateContainer within sandbox \"67cd0404e3c5ec751321f5ad836822bc7f20a1482204d5a03296c0e060ec5250\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"094b431ad05247bc0e01ae2f8c59089bfe7efea294f8fb349a95da1a65edbd42\"" Nov 12 20:56:28.933738 containerd[1577]: time="2024-11-12T20:56:28.933650467Z" level=info msg="StartContainer for \"094b431ad05247bc0e01ae2f8c59089bfe7efea294f8fb349a95da1a65edbd42\"" Nov 12 20:56:29.005550 containerd[1577]: time="2024-11-12T20:56:29.005496523Z" level=info msg="StartContainer for \"094b431ad05247bc0e01ae2f8c59089bfe7efea294f8fb349a95da1a65edbd42\" returns successfully" Nov 12 20:56:32.504190 kubelet[2780]: I1112 20:56:32.504138 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-5tw7z" podStartSLOduration=5.01010886 podStartE2EDuration="8.504062448s" podCreationTimestamp="2024-11-12 20:56:24 +0000 UTC" firstStartedPulling="2024-11-12 20:56:25.418340958 +0000 UTC m=+14.809021297" lastFinishedPulling="2024-11-12 20:56:28.912294537 +0000 UTC m=+18.302974885" observedRunningTime="2024-11-12 20:56:29.908989402 +0000 UTC m=+19.299669760" watchObservedRunningTime="2024-11-12 20:56:32.504062448 +0000 UTC m=+21.894742804" Nov 
12 20:56:32.504877 kubelet[2780]: I1112 20:56:32.504307 2780 topology_manager.go:215] "Topology Admit Handler" podUID="157627a9-b7d8-4d9f-bf8e-1138a21c5815" podNamespace="calico-system" podName="calico-typha-fd47f9ffc-v7hfv" Nov 12 20:56:32.639710 kubelet[2780]: I1112 20:56:32.639217 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/157627a9-b7d8-4d9f-bf8e-1138a21c5815-typha-certs\") pod \"calico-typha-fd47f9ffc-v7hfv\" (UID: \"157627a9-b7d8-4d9f-bf8e-1138a21c5815\") " pod="calico-system/calico-typha-fd47f9ffc-v7hfv" Nov 12 20:56:32.639710 kubelet[2780]: I1112 20:56:32.639332 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/157627a9-b7d8-4d9f-bf8e-1138a21c5815-tigera-ca-bundle\") pod \"calico-typha-fd47f9ffc-v7hfv\" (UID: \"157627a9-b7d8-4d9f-bf8e-1138a21c5815\") " pod="calico-system/calico-typha-fd47f9ffc-v7hfv" Nov 12 20:56:32.639710 kubelet[2780]: I1112 20:56:32.639507 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvdzx\" (UniqueName: \"kubernetes.io/projected/157627a9-b7d8-4d9f-bf8e-1138a21c5815-kube-api-access-hvdzx\") pod \"calico-typha-fd47f9ffc-v7hfv\" (UID: \"157627a9-b7d8-4d9f-bf8e-1138a21c5815\") " pod="calico-system/calico-typha-fd47f9ffc-v7hfv" Nov 12 20:56:32.664705 kubelet[2780]: I1112 20:56:32.664164 2780 topology_manager.go:215] "Topology Admit Handler" podUID="22af5814-b71d-48e2-9335-d8da305bda79" podNamespace="calico-system" podName="calico-node-mzdpn" Nov 12 20:56:32.804105 kubelet[2780]: I1112 20:56:32.803967 2780 topology_manager.go:215] "Topology Admit Handler" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a" podNamespace="calico-system" podName="csi-node-driver-54r88" Nov 12 20:56:32.806126 kubelet[2780]: E1112 20:56:32.804381 2780 pod_workers.go:1298] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a" Nov 12 20:56:32.820874 containerd[1577]: time="2024-11-12T20:56:32.820824829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd47f9ffc-v7hfv,Uid:157627a9-b7d8-4d9f-bf8e-1138a21c5815,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:32.844125 kubelet[2780]: I1112 20:56:32.843574 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-var-run-calico\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.844125 kubelet[2780]: I1112 20:56:32.843640 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-cni-net-dir\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.844125 kubelet[2780]: I1112 20:56:32.843674 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/22af5814-b71d-48e2-9335-d8da305bda79-node-certs\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.844125 kubelet[2780]: I1112 20:56:32.843707 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-cni-log-dir\") pod \"calico-node-mzdpn\" (UID: 
\"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.844125 kubelet[2780]: I1112 20:56:32.843752 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsdc5\" (UniqueName: \"kubernetes.io/projected/22af5814-b71d-48e2-9335-d8da305bda79-kube-api-access-hsdc5\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.844507 kubelet[2780]: I1112 20:56:32.843788 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-xtables-lock\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.844507 kubelet[2780]: I1112 20:56:32.843824 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-var-lib-calico\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.849978 kubelet[2780]: I1112 20:56:32.845395 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-policysync\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.849978 kubelet[2780]: I1112 20:56:32.846290 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1ebb665c-7489-46df-9cad-fdce94e5d49a-varrun\") pod \"csi-node-driver-54r88\" (UID: \"1ebb665c-7489-46df-9cad-fdce94e5d49a\") " 
pod="calico-system/csi-node-driver-54r88" Nov 12 20:56:32.849978 kubelet[2780]: I1112 20:56:32.846856 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22af5814-b71d-48e2-9335-d8da305bda79-tigera-ca-bundle\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.849978 kubelet[2780]: I1112 20:56:32.847258 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-cni-bin-dir\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.849978 kubelet[2780]: I1112 20:56:32.848415 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8hd9\" (UniqueName: \"kubernetes.io/projected/1ebb665c-7489-46df-9cad-fdce94e5d49a-kube-api-access-b8hd9\") pod \"csi-node-driver-54r88\" (UID: \"1ebb665c-7489-46df-9cad-fdce94e5d49a\") " pod="calico-system/csi-node-driver-54r88" Nov 12 20:56:32.851342 kubelet[2780]: I1112 20:56:32.848960 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-lib-modules\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.851342 kubelet[2780]: I1112 20:56:32.849011 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/22af5814-b71d-48e2-9335-d8da305bda79-flexvol-driver-host\") pod \"calico-node-mzdpn\" (UID: \"22af5814-b71d-48e2-9335-d8da305bda79\") " 
pod="calico-system/calico-node-mzdpn" Nov 12 20:56:32.851342 kubelet[2780]: I1112 20:56:32.849050 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ebb665c-7489-46df-9cad-fdce94e5d49a-registration-dir\") pod \"csi-node-driver-54r88\" (UID: \"1ebb665c-7489-46df-9cad-fdce94e5d49a\") " pod="calico-system/csi-node-driver-54r88" Nov 12 20:56:32.851342 kubelet[2780]: I1112 20:56:32.849126 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ebb665c-7489-46df-9cad-fdce94e5d49a-kubelet-dir\") pod \"csi-node-driver-54r88\" (UID: \"1ebb665c-7489-46df-9cad-fdce94e5d49a\") " pod="calico-system/csi-node-driver-54r88" Nov 12 20:56:32.851342 kubelet[2780]: I1112 20:56:32.849163 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ebb665c-7489-46df-9cad-fdce94e5d49a-socket-dir\") pod \"csi-node-driver-54r88\" (UID: \"1ebb665c-7489-46df-9cad-fdce94e5d49a\") " pod="calico-system/csi-node-driver-54r88" Nov 12 20:56:32.883940 containerd[1577]: time="2024-11-12T20:56:32.883831991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:32.884146 containerd[1577]: time="2024-11-12T20:56:32.883978548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:32.884146 containerd[1577]: time="2024-11-12T20:56:32.884022557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:32.884292 containerd[1577]: time="2024-11-12T20:56:32.884206693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:32.968639 kubelet[2780]: E1112 20:56:32.964665 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:32.968639 kubelet[2780]: W1112 20:56:32.965523 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:32.968639 kubelet[2780]: E1112 20:56:32.965559 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:32.968639 kubelet[2780]: E1112 20:56:32.967236 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:32.968639 kubelet[2780]: W1112 20:56:32.967251 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:32.968639 kubelet[2780]: E1112 20:56:32.967381 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:32.975107 kubelet[2780]: E1112 20:56:32.974026 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:32.975239 kubelet[2780]: W1112 20:56:32.975218 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:32.975378 kubelet[2780]: E1112 20:56:32.975360 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:32.984510 kubelet[2780]: E1112 20:56:32.984490 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:32.984642 kubelet[2780]: W1112 20:56:32.984624 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:32.984981 kubelet[2780]: E1112 20:56:32.984964 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:32.991797 kubelet[2780]: E1112 20:56:32.991647 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:32.991797 kubelet[2780]: W1112 20:56:32.991667 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:32.996538 kubelet[2780]: E1112 20:56:32.993766 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:32.996538 kubelet[2780]: E1112 20:56:32.995308 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:32.996538 kubelet[2780]: W1112 20:56:32.995324 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:32.996538 kubelet[2780]: E1112 20:56:32.995345 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:33.002302 kubelet[2780]: E1112 20:56:33.001518 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:33.002302 kubelet[2780]: W1112 20:56:33.001538 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:33.002302 kubelet[2780]: E1112 20:56:33.001671 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:33.008225 kubelet[2780]: E1112 20:56:33.008080 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:33.008225 kubelet[2780]: W1112 20:56:33.008127 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:33.008225 kubelet[2780]: E1112 20:56:33.008151 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:33.055474 containerd[1577]: time="2024-11-12T20:56:33.055302313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd47f9ffc-v7hfv,Uid:157627a9-b7d8-4d9f-bf8e-1138a21c5815,Namespace:calico-system,Attempt:0,} returns sandbox id \"44193be53a64a45f79d147d82ab0ff274914a41c2235078dbedc23fadc73ab5e\"" Nov 12 20:56:33.063374 containerd[1577]: time="2024-11-12T20:56:33.063144412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:56:33.279519 containerd[1577]: time="2024-11-12T20:56:33.278738168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mzdpn,Uid:22af5814-b71d-48e2-9335-d8da305bda79,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:33.313217 containerd[1577]: time="2024-11-12T20:56:33.312745802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:33.313217 containerd[1577]: time="2024-11-12T20:56:33.312857517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:33.313217 containerd[1577]: time="2024-11-12T20:56:33.312886243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:33.313217 containerd[1577]: time="2024-11-12T20:56:33.313030644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:33.362925 containerd[1577]: time="2024-11-12T20:56:33.362515353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mzdpn,Uid:22af5814-b71d-48e2-9335-d8da305bda79,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ff60ae9db3aae39be97d82a501737bc5eb3df22b4f41231229de913258c032c\"" Nov 12 20:56:33.763109 systemd[1]: run-containerd-runc-k8s.io-44193be53a64a45f79d147d82ab0ff274914a41c2235078dbedc23fadc73ab5e-runc.GY6VRR.mount: Deactivated successfully. Nov 12 20:56:34.791923 kubelet[2780]: E1112 20:56:34.791276 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a" Nov 12 20:56:35.090898 containerd[1577]: time="2024-11-12T20:56:35.090716988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:35.092765 containerd[1577]: time="2024-11-12T20:56:35.092697379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:56:35.094133 containerd[1577]: time="2024-11-12T20:56:35.094013023Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:35.096976 containerd[1577]: time="2024-11-12T20:56:35.096933856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:35.098165 containerd[1577]: time="2024-11-12T20:56:35.097967078Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.034537174s" Nov 12 20:56:35.098165 containerd[1577]: time="2024-11-12T20:56:35.098015546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:56:35.099699 containerd[1577]: time="2024-11-12T20:56:35.099262419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:56:35.122126 containerd[1577]: time="2024-11-12T20:56:35.122058142Z" level=info msg="CreateContainer within sandbox \"44193be53a64a45f79d147d82ab0ff274914a41c2235078dbedc23fadc73ab5e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:56:35.141310 containerd[1577]: time="2024-11-12T20:56:35.141269201Z" level=info msg="CreateContainer within sandbox \"44193be53a64a45f79d147d82ab0ff274914a41c2235078dbedc23fadc73ab5e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"71d5cda3d6d714ccfebac4b7edfd13a26109d833f87991fc69fd92d5877644a9\"" Nov 12 20:56:35.142838 containerd[1577]: time="2024-11-12T20:56:35.142261720Z" level=info msg="StartContainer for \"71d5cda3d6d714ccfebac4b7edfd13a26109d833f87991fc69fd92d5877644a9\"" Nov 12 20:56:35.233272 containerd[1577]: time="2024-11-12T20:56:35.233213184Z" level=info msg="StartContainer for \"71d5cda3d6d714ccfebac4b7edfd13a26109d833f87991fc69fd92d5877644a9\" returns successfully" Nov 12 20:56:35.947285 kubelet[2780]: I1112 20:56:35.947243 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-fd47f9ffc-v7hfv" podStartSLOduration=1.9103082260000002 podStartE2EDuration="3.94718511s" 
podCreationTimestamp="2024-11-12 20:56:32 +0000 UTC" firstStartedPulling="2024-11-12 20:56:33.061911783 +0000 UTC m=+22.452592120" lastFinishedPulling="2024-11-12 20:56:35.098788652 +0000 UTC m=+24.489469004" observedRunningTime="2024-11-12 20:56:35.946664811 +0000 UTC m=+25.337345175" watchObservedRunningTime="2024-11-12 20:56:35.94718511 +0000 UTC m=+25.337865466" Nov 12 20:56:35.967341 kubelet[2780]: E1112 20:56:35.967302 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.967341 kubelet[2780]: W1112 20:56:35.967334 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.967601 kubelet[2780]: E1112 20:56:35.967366 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.967767 kubelet[2780]: E1112 20:56:35.967740 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.967767 kubelet[2780]: W1112 20:56:35.967759 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.968016 kubelet[2780]: E1112 20:56:35.967785 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.968195 kubelet[2780]: E1112 20:56:35.968164 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.968195 kubelet[2780]: W1112 20:56:35.968184 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.968322 kubelet[2780]: E1112 20:56:35.968207 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.968559 kubelet[2780]: E1112 20:56:35.968537 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.968559 kubelet[2780]: W1112 20:56:35.968557 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.968723 kubelet[2780]: E1112 20:56:35.968579 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.968899 kubelet[2780]: E1112 20:56:35.968881 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.968899 kubelet[2780]: W1112 20:56:35.968897 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.969103 kubelet[2780]: E1112 20:56:35.968918 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.969248 kubelet[2780]: E1112 20:56:35.969234 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.969248 kubelet[2780]: W1112 20:56:35.969251 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.969248 kubelet[2780]: E1112 20:56:35.969271 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.969609 kubelet[2780]: E1112 20:56:35.969567 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.969609 kubelet[2780]: W1112 20:56:35.969584 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.969609 kubelet[2780]: E1112 20:56:35.969604 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.969906 kubelet[2780]: E1112 20:56:35.969887 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.969906 kubelet[2780]: W1112 20:56:35.969904 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.970050 kubelet[2780]: E1112 20:56:35.969923 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.970275 kubelet[2780]: E1112 20:56:35.970255 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.970275 kubelet[2780]: W1112 20:56:35.970272 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.970505 kubelet[2780]: E1112 20:56:35.970292 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.970597 kubelet[2780]: E1112 20:56:35.970562 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.970597 kubelet[2780]: W1112 20:56:35.970575 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.970597 kubelet[2780]: E1112 20:56:35.970594 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.970922 kubelet[2780]: E1112 20:56:35.970887 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.970922 kubelet[2780]: W1112 20:56:35.970901 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.970922 kubelet[2780]: E1112 20:56:35.970919 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.971260 kubelet[2780]: E1112 20:56:35.971242 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.971260 kubelet[2780]: W1112 20:56:35.971259 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.971419 kubelet[2780]: E1112 20:56:35.971280 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.971711 kubelet[2780]: E1112 20:56:35.971695 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.972053 kubelet[2780]: W1112 20:56:35.971860 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.972053 kubelet[2780]: E1112 20:56:35.971891 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.972682 kubelet[2780]: E1112 20:56:35.972437 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.972682 kubelet[2780]: W1112 20:56:35.972452 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.972682 kubelet[2780]: E1112 20:56:35.972471 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.972984 kubelet[2780]: E1112 20:56:35.972745 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.972984 kubelet[2780]: W1112 20:56:35.972759 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.972984 kubelet[2780]: E1112 20:56:35.972778 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.976848 kubelet[2780]: E1112 20:56:35.976829 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.977383 kubelet[2780]: W1112 20:56:35.976902 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.977383 kubelet[2780]: E1112 20:56:35.976929 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.977529 kubelet[2780]: E1112 20:56:35.977421 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.977529 kubelet[2780]: W1112 20:56:35.977437 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.977529 kubelet[2780]: E1112 20:56:35.977457 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.977966 kubelet[2780]: E1112 20:56:35.977946 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.977966 kubelet[2780]: W1112 20:56:35.977964 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.978134 kubelet[2780]: E1112 20:56:35.977991 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.978400 kubelet[2780]: E1112 20:56:35.978373 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.978400 kubelet[2780]: W1112 20:56:35.978391 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.978552 kubelet[2780]: E1112 20:56:35.978430 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.978776 kubelet[2780]: E1112 20:56:35.978759 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.978846 kubelet[2780]: W1112 20:56:35.978787 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.978846 kubelet[2780]: E1112 20:56:35.978827 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.979200 kubelet[2780]: E1112 20:56:35.979179 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.979200 kubelet[2780]: W1112 20:56:35.979197 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.979489 kubelet[2780]: E1112 20:56:35.979294 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.979583 kubelet[2780]: E1112 20:56:35.979565 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.979648 kubelet[2780]: W1112 20:56:35.979583 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.979718 kubelet[2780]: E1112 20:56:35.979701 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.979924 kubelet[2780]: E1112 20:56:35.979905 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.979924 kubelet[2780]: W1112 20:56:35.979922 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.980066 kubelet[2780]: E1112 20:56:35.980037 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.980292 kubelet[2780]: E1112 20:56:35.980267 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.980292 kubelet[2780]: W1112 20:56:35.980284 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.980480 kubelet[2780]: E1112 20:56:35.980310 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.980815 kubelet[2780]: E1112 20:56:35.980797 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.980815 kubelet[2780]: W1112 20:56:35.980813 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.981016 kubelet[2780]: E1112 20:56:35.980838 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.981182 kubelet[2780]: E1112 20:56:35.981165 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.981182 kubelet[2780]: W1112 20:56:35.981181 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.981315 kubelet[2780]: E1112 20:56:35.981217 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.981571 kubelet[2780]: E1112 20:56:35.981517 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.981571 kubelet[2780]: W1112 20:56:35.981560 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.981714 kubelet[2780]: E1112 20:56:35.981660 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.981958 kubelet[2780]: E1112 20:56:35.981938 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.981958 kubelet[2780]: W1112 20:56:35.981956 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.982132 kubelet[2780]: E1112 20:56:35.981983 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:35.982319 kubelet[2780]: E1112 20:56:35.982300 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.982319 kubelet[2780]: W1112 20:56:35.982317 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.982448 kubelet[2780]: E1112 20:56:35.982342 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:35.982679 kubelet[2780]: E1112 20:56:35.982658 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:35.982679 kubelet[2780]: W1112 20:56:35.982679 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:35.982799 kubelet[2780]: E1112 20:56:35.982703 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 20:56:35.983293 kubelet[2780]: E1112 20:56:35.983268 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:35.983293 kubelet[2780]: W1112 20:56:35.983284 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:35.983438 kubelet[2780]: E1112 20:56:35.983325 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:35.983825 kubelet[2780]: E1112 20:56:35.983802 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:35.983825 kubelet[2780]: W1112 20:56:35.983821 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:35.983971 kubelet[2780]: E1112 20:56:35.983850 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:35.984216 kubelet[2780]: E1112 20:56:35.984198 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:35.984216 kubelet[2780]: W1112 20:56:35.984216 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:35.984343 kubelet[2780]: E1112 20:56:35.984236 2780 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:36.272515 containerd[1577]: time="2024-11-12T20:56:36.272358736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:36.276114 containerd[1577]: time="2024-11-12T20:56:36.275022500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116"
Nov 12 20:56:36.281131 containerd[1577]: time="2024-11-12T20:56:36.277473636Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:36.282942 containerd[1577]: time="2024-11-12T20:56:36.282905667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:36.285216 containerd[1577]: time="2024-11-12T20:56:36.284726861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.185419812s"
Nov 12 20:56:36.286079 containerd[1577]: time="2024-11-12T20:56:36.286012375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\""
Nov 12 20:56:36.289031 containerd[1577]: time="2024-11-12T20:56:36.288884954Z" level=info msg="CreateContainer within sandbox \"6ff60ae9db3aae39be97d82a501737bc5eb3df22b4f41231229de913258c032c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 12 20:56:36.313865 containerd[1577]: time="2024-11-12T20:56:36.313828441Z" level=info msg="CreateContainer within sandbox \"6ff60ae9db3aae39be97d82a501737bc5eb3df22b4f41231229de913258c032c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8fcf2abe614f5c98a9e141fec571b2a35ea989084694bbc09e45cc40d4d3a271\""
Nov 12 20:56:36.320156 containerd[1577]: time="2024-11-12T20:56:36.320120546Z" level=info msg="StartContainer for \"8fcf2abe614f5c98a9e141fec571b2a35ea989084694bbc09e45cc40d4d3a271\""
Nov 12 20:56:36.329644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1049796822.mount: Deactivated successfully.
Nov 12 20:56:36.443174 containerd[1577]: time="2024-11-12T20:56:36.443126135Z" level=info msg="StartContainer for \"8fcf2abe614f5c98a9e141fec571b2a35ea989084694bbc09e45cc40d4d3a271\" returns successfully"
Nov 12 20:56:36.791133 kubelet[2780]: E1112 20:56:36.790628 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a"
Nov 12 20:56:36.927546 kubelet[2780]: I1112 20:56:36.927489 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:56:37.110675 systemd[1]: run-containerd-runc-k8s.io-8fcf2abe614f5c98a9e141fec571b2a35ea989084694bbc09e45cc40d4d3a271-runc.4pcNww.mount: Deactivated successfully.
Nov 12 20:56:37.111026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fcf2abe614f5c98a9e141fec571b2a35ea989084694bbc09e45cc40d4d3a271-rootfs.mount: Deactivated successfully.
Nov 12 20:56:37.118304 containerd[1577]: time="2024-11-12T20:56:37.118123117Z" level=info msg="shim disconnected" id=8fcf2abe614f5c98a9e141fec571b2a35ea989084694bbc09e45cc40d4d3a271 namespace=k8s.io
Nov 12 20:56:37.118304 containerd[1577]: time="2024-11-12T20:56:37.118236828Z" level=warning msg="cleaning up after shim disconnected" id=8fcf2abe614f5c98a9e141fec571b2a35ea989084694bbc09e45cc40d4d3a271 namespace=k8s.io
Nov 12 20:56:37.118304 containerd[1577]: time="2024-11-12T20:56:37.118255587Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:56:37.932069 containerd[1577]: time="2024-11-12T20:56:37.931763531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\""
Nov 12 20:56:38.790906 kubelet[2780]: E1112 20:56:38.790860 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a"
Nov 12 20:56:40.791276 kubelet[2780]: E1112 20:56:40.791240 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a"
Nov 12 20:56:41.816650 containerd[1577]: time="2024-11-12T20:56:41.816592081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:41.818175 containerd[1577]: time="2024-11-12T20:56:41.818114675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683"
Nov 12 20:56:41.819758 containerd[1577]: time="2024-11-12T20:56:41.819684332Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:41.824368 containerd[1577]: time="2024-11-12T20:56:41.824289735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:41.826278 containerd[1577]: time="2024-11-12T20:56:41.825404064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 3.893588472s"
Nov 12 20:56:41.826278 containerd[1577]: time="2024-11-12T20:56:41.825451895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\""
Nov 12 20:56:41.828192 containerd[1577]: time="2024-11-12T20:56:41.828147313Z" level=info msg="CreateContainer within sandbox \"6ff60ae9db3aae39be97d82a501737bc5eb3df22b4f41231229de913258c032c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 20:56:41.853760 containerd[1577]: time="2024-11-12T20:56:41.853702260Z" level=info msg="CreateContainer within sandbox \"6ff60ae9db3aae39be97d82a501737bc5eb3df22b4f41231229de913258c032c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b52ba88c60a35dda8dcc6f51b60764fb097b1332b78b4b75017a8ca42f104545\""
Nov 12 20:56:41.855484 containerd[1577]: time="2024-11-12T20:56:41.854856040Z" level=info msg="StartContainer for \"b52ba88c60a35dda8dcc6f51b60764fb097b1332b78b4b75017a8ca42f104545\""
Nov 12 20:56:41.940275 containerd[1577]: time="2024-11-12T20:56:41.940007922Z" level=info msg="StartContainer for \"b52ba88c60a35dda8dcc6f51b60764fb097b1332b78b4b75017a8ca42f104545\" returns successfully"
Nov 12 20:56:42.697902 kubelet[2780]: I1112 20:56:42.696183 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:56:42.791190 kubelet[2780]: E1112 20:56:42.791133 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a"
Nov 12 20:56:42.837783 containerd[1577]: time="2024-11-12T20:56:42.837706512Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:56:42.871606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b52ba88c60a35dda8dcc6f51b60764fb097b1332b78b4b75017a8ca42f104545-rootfs.mount: Deactivated successfully.
Nov 12 20:56:42.879493 kubelet[2780]: I1112 20:56:42.878512 2780 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 20:56:42.910914 kubelet[2780]: I1112 20:56:42.908691 2780 topology_manager.go:215] "Topology Admit Handler" podUID="017708f0-c5c9-4372-bddb-a2a7a49fd2e0" podNamespace="kube-system" podName="coredns-76f75df574-r8k87"
Nov 12 20:56:42.913955 kubelet[2780]: I1112 20:56:42.912830 2780 topology_manager.go:215] "Topology Admit Handler" podUID="1f661fac-f550-4093-a121-8425e9897475" podNamespace="calico-system" podName="calico-kube-controllers-54d5d9c55f-qv98l"
Nov 12 20:56:42.925126 kubelet[2780]: I1112 20:56:42.924168 2780 topology_manager.go:215] "Topology Admit Handler" podUID="2176e7e1-d94c-479d-92e3-e9f80e8d0f4d" podNamespace="calico-apiserver" podName="calico-apiserver-76cbf9b5bf-pz72g"
Nov 12 20:56:42.930609 kubelet[2780]: I1112 20:56:42.925825 2780 topology_manager.go:215] "Topology Admit Handler" podUID="df2bc72d-5575-407e-ae43-315c296f87af" podNamespace="kube-system" podName="coredns-76f75df574-s7pp7"
Nov 12 20:56:42.930609 kubelet[2780]: I1112 20:56:42.927839 2780 topology_manager.go:215] "Topology Admit Handler" podUID="43370d02-66a8-4ab1-8864-281286226360" podNamespace="calico-apiserver" podName="calico-apiserver-76cbf9b5bf-sf428"
Nov 12 20:56:43.030811 kubelet[2780]: I1112 20:56:43.030649 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/017708f0-c5c9-4372-bddb-a2a7a49fd2e0-config-volume\") pod \"coredns-76f75df574-r8k87\" (UID: \"017708f0-c5c9-4372-bddb-a2a7a49fd2e0\") " pod="kube-system/coredns-76f75df574-r8k87"
Nov 12 20:56:43.030811 kubelet[2780]: I1112 20:56:43.030773 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2176e7e1-d94c-479d-92e3-e9f80e8d0f4d-calico-apiserver-certs\") pod \"calico-apiserver-76cbf9b5bf-pz72g\" (UID: \"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d\") " pod="calico-apiserver/calico-apiserver-76cbf9b5bf-pz72g"
Nov 12 20:56:43.031063 kubelet[2780]: I1112 20:56:43.030837 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6md\" (UniqueName: \"kubernetes.io/projected/2176e7e1-d94c-479d-92e3-e9f80e8d0f4d-kube-api-access-gs6md\") pod \"calico-apiserver-76cbf9b5bf-pz72g\" (UID: \"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d\") " pod="calico-apiserver/calico-apiserver-76cbf9b5bf-pz72g"
Nov 12 20:56:43.031063 kubelet[2780]: I1112 20:56:43.030912 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbm28\" (UniqueName: \"kubernetes.io/projected/43370d02-66a8-4ab1-8864-281286226360-kube-api-access-nbm28\") pod \"calico-apiserver-76cbf9b5bf-sf428\" (UID: \"43370d02-66a8-4ab1-8864-281286226360\") " pod="calico-apiserver/calico-apiserver-76cbf9b5bf-sf428"
Nov 12 20:56:43.031063 kubelet[2780]: I1112 20:56:43.030951 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df2bc72d-5575-407e-ae43-315c296f87af-config-volume\") pod \"coredns-76f75df574-s7pp7\" (UID: \"df2bc72d-5575-407e-ae43-315c296f87af\") " pod="kube-system/coredns-76f75df574-s7pp7"
Nov 12 20:56:43.031063 kubelet[2780]: I1112 20:56:43.030985 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6f9\" (UniqueName: \"kubernetes.io/projected/df2bc72d-5575-407e-ae43-315c296f87af-kube-api-access-9b6f9\") pod \"coredns-76f75df574-s7pp7\" (UID: \"df2bc72d-5575-407e-ae43-315c296f87af\") " pod="kube-system/coredns-76f75df574-s7pp7"
Nov 12 20:56:43.031063 kubelet[2780]: I1112 20:56:43.031028 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8h4\" (UniqueName: \"kubernetes.io/projected/017708f0-c5c9-4372-bddb-a2a7a49fd2e0-kube-api-access-kl8h4\") pod \"coredns-76f75df574-r8k87\" (UID: \"017708f0-c5c9-4372-bddb-a2a7a49fd2e0\") " pod="kube-system/coredns-76f75df574-r8k87"
Nov 12 20:56:43.031364 kubelet[2780]: I1112 20:56:43.031068 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f661fac-f550-4093-a121-8425e9897475-tigera-ca-bundle\") pod \"calico-kube-controllers-54d5d9c55f-qv98l\" (UID: \"1f661fac-f550-4093-a121-8425e9897475\") " pod="calico-system/calico-kube-controllers-54d5d9c55f-qv98l"
Nov 12 20:56:43.031364 kubelet[2780]: I1112 20:56:43.031130 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43370d02-66a8-4ab1-8864-281286226360-calico-apiserver-certs\") pod \"calico-apiserver-76cbf9b5bf-sf428\" (UID: \"43370d02-66a8-4ab1-8864-281286226360\") " pod="calico-apiserver/calico-apiserver-76cbf9b5bf-sf428"
Nov 12 20:56:43.031364 kubelet[2780]: I1112 20:56:43.031176 2780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp6rg\" (UniqueName: \"kubernetes.io/projected/1f661fac-f550-4093-a121-8425e9897475-kube-api-access-bp6rg\") pod \"calico-kube-controllers-54d5d9c55f-qv98l\" (UID: \"1f661fac-f550-4093-a121-8425e9897475\") " pod="calico-system/calico-kube-controllers-54d5d9c55f-qv98l"
Nov 12 20:56:43.237478 containerd[1577]: time="2024-11-12T20:56:43.236962164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-pz72g,Uid:2176e7e1-d94c-479d-92e3-e9f80e8d0f4d,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 20:56:43.237478 containerd[1577]: time="2024-11-12T20:56:43.237278323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r8k87,Uid:017708f0-c5c9-4372-bddb-a2a7a49fd2e0,Namespace:kube-system,Attempt:0,}"
Nov 12 20:56:43.243385 containerd[1577]: time="2024-11-12T20:56:43.243327043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5d9c55f-qv98l,Uid:1f661fac-f550-4093-a121-8425e9897475,Namespace:calico-system,Attempt:0,}"
Nov 12 20:56:43.254200 containerd[1577]: time="2024-11-12T20:56:43.254136813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7pp7,Uid:df2bc72d-5575-407e-ae43-315c296f87af,Namespace:kube-system,Attempt:0,}"
Nov 12 20:56:43.255750 containerd[1577]: time="2024-11-12T20:56:43.255665537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-sf428,Uid:43370d02-66a8-4ab1-8864-281286226360,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 20:56:43.637039 containerd[1577]: time="2024-11-12T20:56:43.636972312Z" level=info msg="shim disconnected" id=b52ba88c60a35dda8dcc6f51b60764fb097b1332b78b4b75017a8ca42f104545 namespace=k8s.io
Nov 12 20:56:43.637604 containerd[1577]: time="2024-11-12T20:56:43.637291645Z" level=warning msg="cleaning up after shim disconnected" id=b52ba88c60a35dda8dcc6f51b60764fb097b1332b78b4b75017a8ca42f104545 namespace=k8s.io
Nov 12 20:56:43.637604 containerd[1577]: time="2024-11-12T20:56:43.637319463Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:56:43.911282 containerd[1577]: time="2024-11-12T20:56:43.910994216Z" level=error msg="Failed to destroy network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.913861 containerd[1577]: time="2024-11-12T20:56:43.912162581Z" level=error msg="encountered an error cleaning up failed sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.913861 containerd[1577]: time="2024-11-12T20:56:43.912250898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-pz72g,Uid:2176e7e1-d94c-479d-92e3-e9f80e8d0f4d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.914136 kubelet[2780]: E1112 20:56:43.913310 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.914136 kubelet[2780]: E1112 20:56:43.913391 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-pz72g"
Nov 12 20:56:43.914136 kubelet[2780]: E1112 20:56:43.913430 2780 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-pz72g"
Nov 12 20:56:43.914726 kubelet[2780]: E1112 20:56:43.913529 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76cbf9b5bf-pz72g_calico-apiserver(2176e7e1-d94c-479d-92e3-e9f80e8d0f4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76cbf9b5bf-pz72g_calico-apiserver(2176e7e1-d94c-479d-92e3-e9f80e8d0f4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-pz72g" podUID="2176e7e1-d94c-479d-92e3-e9f80e8d0f4d"
Nov 12 20:56:43.922550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa-shm.mount: Deactivated successfully.
Nov 12 20:56:43.943780 containerd[1577]: time="2024-11-12T20:56:43.943441305Z" level=error msg="Failed to destroy network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.944359 containerd[1577]: time="2024-11-12T20:56:43.944154456Z" level=error msg="encountered an error cleaning up failed sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.944359 containerd[1577]: time="2024-11-12T20:56:43.944254119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r8k87,Uid:017708f0-c5c9-4372-bddb-a2a7a49fd2e0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.945355 kubelet[2780]: E1112 20:56:43.944816 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.945355 kubelet[2780]: E1112 20:56:43.944886 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-r8k87"
Nov 12 20:56:43.945355 kubelet[2780]: E1112 20:56:43.944922 2780 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-r8k87"
Nov 12 20:56:43.945595 kubelet[2780]: E1112 20:56:43.945002 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-r8k87_kube-system(017708f0-c5c9-4372-bddb-a2a7a49fd2e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-r8k87_kube-system(017708f0-c5c9-4372-bddb-a2a7a49fd2e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-r8k87" podUID="017708f0-c5c9-4372-bddb-a2a7a49fd2e0"
Nov 12 20:56:43.956908 containerd[1577]: time="2024-11-12T20:56:43.956292035Z" level=error msg="Failed to destroy network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.956908 containerd[1577]: time="2024-11-12T20:56:43.956757995Z" level=error msg="encountered an error cleaning up failed sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.956908 containerd[1577]: time="2024-11-12T20:56:43.956826229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5d9c55f-qv98l,Uid:1f661fac-f550-4093-a121-8425e9897475,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.957511 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5-shm.mount: Deactivated successfully.
Nov 12 20:56:43.958164 kubelet[2780]: E1112 20:56:43.957503 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.958164 kubelet[2780]: E1112 20:56:43.957563 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d5d9c55f-qv98l"
Nov 12 20:56:43.958164 kubelet[2780]: E1112 20:56:43.957627 2780 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d5d9c55f-qv98l"
Nov 12 20:56:43.958952 kubelet[2780]: E1112 20:56:43.957705 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54d5d9c55f-qv98l_calico-system(1f661fac-f550-4093-a121-8425e9897475)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54d5d9c55f-qv98l_calico-system(1f661fac-f550-4093-a121-8425e9897475)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d5d9c55f-qv98l" podUID="1f661fac-f550-4093-a121-8425e9897475"
Nov 12 20:56:43.970165 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c-shm.mount: Deactivated successfully.
Nov 12 20:56:43.977288 kubelet[2780]: I1112 20:56:43.976985 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c"
Nov 12 20:56:43.982779 containerd[1577]: time="2024-11-12T20:56:43.982596048Z" level=info msg="StopPodSandbox for \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\""
Nov 12 20:56:43.985363 containerd[1577]: time="2024-11-12T20:56:43.985318895Z" level=info msg="Ensure that sandbox 3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c in task-service has been cleanup successfully"
Nov 12 20:56:43.985792 kubelet[2780]: I1112 20:56:43.985768 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5"
Nov 12 20:56:43.991470 containerd[1577]: time="2024-11-12T20:56:43.990827450Z" level=info msg="StopPodSandbox for \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\""
Nov 12 20:56:43.992361 containerd[1577]: time="2024-11-12T20:56:43.992319809Z" level=info msg="Ensure that sandbox dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5 in task-service has been cleanup successfully"
Nov 12 20:56:43.994723 containerd[1577]: time="2024-11-12T20:56:43.994685127Z" level=error msg="Failed to destroy network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:43.995406 kubelet[2780]: I1112 20:56:43.995382 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa"
Nov 12 20:56:43.999496 containerd[1577]: time="2024-11-12T20:56:43.999455899Z" level=info msg="StopPodSandbox for \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\""
Nov 12 20:56:44.001053 containerd[1577]: time="2024-11-12T20:56:44.000513966Z" level=info msg="Ensure that sandbox 1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa in task-service has been cleanup successfully"
Nov 12 20:56:44.002754 containerd[1577]: time="2024-11-12T20:56:44.002714390Z" level=error msg="encountered an error cleaning up failed sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.002992 containerd[1577]: time="2024-11-12T20:56:44.002890698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7pp7,Uid:df2bc72d-5575-407e-ae43-315c296f87af,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.004286 kubelet[2780]: E1112 20:56:44.004226 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.004986 kubelet[2780]: E1112 20:56:44.004584 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s7pp7"
Nov 12 20:56:44.004986 kubelet[2780]: E1112 20:56:44.004632 2780 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-s7pp7"
Nov 12 20:56:44.004986 kubelet[2780]: E1112 20:56:44.004699 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-s7pp7_kube-system(df2bc72d-5575-407e-ae43-315c296f87af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-s7pp7_kube-system(df2bc72d-5575-407e-ae43-315c296f87af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s7pp7" podUID="df2bc72d-5575-407e-ae43-315c296f87af"
Nov 12 20:56:44.004626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8-shm.mount: Deactivated successfully.
Nov 12 20:56:44.019172 containerd[1577]: time="2024-11-12T20:56:44.017982922Z" level=error msg="Failed to destroy network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.022870 containerd[1577]: time="2024-11-12T20:56:44.021758745Z" level=error msg="encountered an error cleaning up failed sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.031921 containerd[1577]: time="2024-11-12T20:56:44.027556227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\""
Nov 12 20:56:44.034303 containerd[1577]: time="2024-11-12T20:56:44.033323907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-sf428,Uid:43370d02-66a8-4ab1-8864-281286226360,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.039138 kubelet[2780]: E1112 20:56:44.038434 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.039138 kubelet[2780]: E1112 20:56:44.038813 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-sf428"
Nov 12 20:56:44.039138 kubelet[2780]: E1112 20:56:44.038851 2780 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-sf428"
Nov 12 20:56:44.040583 kubelet[2780]: E1112 20:56:44.040423 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76cbf9b5bf-sf428_calico-apiserver(43370d02-66a8-4ab1-8864-281286226360)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76cbf9b5bf-sf428_calico-apiserver(43370d02-66a8-4ab1-8864-281286226360)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-sf428" podUID="43370d02-66a8-4ab1-8864-281286226360"
Nov 12 20:56:44.085881 containerd[1577]: time="2024-11-12T20:56:44.085441908Z" level=error msg="StopPodSandbox for \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\" failed" error="failed to destroy network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 20:56:44.086217 kubelet[2780]: E1112 20:56:44.086157 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa"
Nov 12 20:56:44.086484 kubelet[2780]: E1112 20:56:44.086262 2780 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa"}
Nov 12 20:56:44.086484 kubelet[2780]: E1112 20:56:44.086338 2780 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Nov 12 20:56:44.086484 kubelet[2780]:
E1112 20:56:44.086416 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-pz72g" podUID="2176e7e1-d94c-479d-92e3-e9f80e8d0f4d" Nov 12 20:56:44.097353 containerd[1577]: time="2024-11-12T20:56:44.097293451Z" level=error msg="StopPodSandbox for \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\" failed" error="failed to destroy network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:44.097684 kubelet[2780]: E1112 20:56:44.097655 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:56:44.098178 kubelet[2780]: E1112 20:56:44.097714 2780 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5"} Nov 12 20:56:44.098178 kubelet[2780]: E1112 20:56:44.097770 2780 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"017708f0-c5c9-4372-bddb-a2a7a49fd2e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:44.098178 kubelet[2780]: E1112 20:56:44.097875 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"017708f0-c5c9-4372-bddb-a2a7a49fd2e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-r8k87" podUID="017708f0-c5c9-4372-bddb-a2a7a49fd2e0" Nov 12 20:56:44.102788 containerd[1577]: time="2024-11-12T20:56:44.102555201Z" level=error msg="StopPodSandbox for \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\" failed" error="failed to destroy network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:44.103262 kubelet[2780]: E1112 20:56:44.103053 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:56:44.103262 kubelet[2780]: E1112 20:56:44.103119 2780 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c"} Nov 12 20:56:44.103262 kubelet[2780]: E1112 20:56:44.103179 2780 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f661fac-f550-4093-a121-8425e9897475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:44.103262 kubelet[2780]: E1112 20:56:44.103224 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f661fac-f550-4093-a121-8425e9897475\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d5d9c55f-qv98l" podUID="1f661fac-f550-4093-a121-8425e9897475" Nov 12 20:56:44.798290 containerd[1577]: time="2024-11-12T20:56:44.798160859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r88,Uid:1ebb665c-7489-46df-9cad-fdce94e5d49a,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:44.880075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c-shm.mount: Deactivated successfully. 
Nov 12 20:56:44.934057 containerd[1577]: time="2024-11-12T20:56:44.933997325Z" level=error msg="Failed to destroy network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:44.935330 containerd[1577]: time="2024-11-12T20:56:44.935282624Z" level=error msg="encountered an error cleaning up failed sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:44.935721 containerd[1577]: time="2024-11-12T20:56:44.935664825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r88,Uid:1ebb665c-7489-46df-9cad-fdce94e5d49a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:44.937543 kubelet[2780]: E1112 20:56:44.936285 2780 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:44.937543 kubelet[2780]: E1112 20:56:44.936353 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-54r88" Nov 12 20:56:44.937543 kubelet[2780]: E1112 20:56:44.936386 2780 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-54r88" Nov 12 20:56:44.938160 kubelet[2780]: E1112 20:56:44.936466 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-54r88_calico-system(1ebb665c-7489-46df-9cad-fdce94e5d49a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-54r88_calico-system(1ebb665c-7489-46df-9cad-fdce94e5d49a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a" Nov 12 20:56:44.945001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0-shm.mount: Deactivated successfully. 
Nov 12 20:56:45.017386 kubelet[2780]: I1112 20:56:45.017339 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:56:45.018861 containerd[1577]: time="2024-11-12T20:56:45.018047428Z" level=info msg="StopPodSandbox for \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\"" Nov 12 20:56:45.018861 containerd[1577]: time="2024-11-12T20:56:45.018325611Z" level=info msg="Ensure that sandbox 8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c in task-service has been cleanup successfully" Nov 12 20:56:45.020699 kubelet[2780]: I1112 20:56:45.020138 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:56:45.021186 containerd[1577]: time="2024-11-12T20:56:45.021028769Z" level=info msg="StopPodSandbox for \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\"" Nov 12 20:56:45.022448 containerd[1577]: time="2024-11-12T20:56:45.022403128Z" level=info msg="Ensure that sandbox 8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8 in task-service has been cleanup successfully" Nov 12 20:56:45.023389 kubelet[2780]: I1112 20:56:45.023289 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:56:45.025201 containerd[1577]: time="2024-11-12T20:56:45.025120797Z" level=info msg="StopPodSandbox for \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\"" Nov 12 20:56:45.025685 containerd[1577]: time="2024-11-12T20:56:45.025620878Z" level=info msg="Ensure that sandbox 9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0 in task-service has been cleanup successfully" Nov 12 20:56:45.093693 containerd[1577]: time="2024-11-12T20:56:45.093519103Z" level=error msg="StopPodSandbox for 
\"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\" failed" error="failed to destroy network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:45.094489 kubelet[2780]: E1112 20:56:45.094313 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:56:45.094489 kubelet[2780]: E1112 20:56:45.094376 2780 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0"} Nov 12 20:56:45.095435 kubelet[2780]: E1112 20:56:45.095182 2780 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ebb665c-7489-46df-9cad-fdce94e5d49a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:45.095435 kubelet[2780]: E1112 20:56:45.095251 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ebb665c-7489-46df-9cad-fdce94e5d49a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54r88" podUID="1ebb665c-7489-46df-9cad-fdce94e5d49a" Nov 12 20:56:45.096587 containerd[1577]: time="2024-11-12T20:56:45.096355011Z" level=error msg="StopPodSandbox for \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\" failed" error="failed to destroy network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:45.097030 kubelet[2780]: E1112 20:56:45.096901 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:56:45.097030 kubelet[2780]: E1112 20:56:45.096964 2780 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c"} Nov 12 20:56:45.097478 kubelet[2780]: E1112 20:56:45.097199 2780 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43370d02-66a8-4ab1-8864-281286226360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:45.097478 kubelet[2780]: E1112 20:56:45.097452 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43370d02-66a8-4ab1-8864-281286226360\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-sf428" podUID="43370d02-66a8-4ab1-8864-281286226360" Nov 12 20:56:45.105828 containerd[1577]: time="2024-11-12T20:56:45.105770375Z" level=error msg="StopPodSandbox for \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\" failed" error="failed to destroy network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:45.106126 kubelet[2780]: E1112 20:56:45.106081 2780 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:56:45.106247 kubelet[2780]: E1112 20:56:45.106150 2780 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8"} Nov 12 20:56:45.106247 kubelet[2780]: E1112 20:56:45.106212 2780 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df2bc72d-5575-407e-ae43-315c296f87af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:45.106407 kubelet[2780]: E1112 20:56:45.106262 2780 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df2bc72d-5575-407e-ae43-315c296f87af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-s7pp7" podUID="df2bc72d-5575-407e-ae43-315c296f87af" Nov 12 20:56:50.206616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169796558.mount: Deactivated successfully. 
Nov 12 20:56:50.236521 containerd[1577]: time="2024-11-12T20:56:50.236454325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:50.237846 containerd[1577]: time="2024-11-12T20:56:50.237775319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:56:50.239277 containerd[1577]: time="2024-11-12T20:56:50.239191578Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:50.241945 containerd[1577]: time="2024-11-12T20:56:50.241883480Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:50.243800 containerd[1577]: time="2024-11-12T20:56:50.242865742Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 6.210978767s" Nov 12 20:56:50.243800 containerd[1577]: time="2024-11-12T20:56:50.242911471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:56:50.268995 containerd[1577]: time="2024-11-12T20:56:50.268940711Z" level=info msg="CreateContainer within sandbox \"6ff60ae9db3aae39be97d82a501737bc5eb3df22b4f41231229de913258c032c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:56:50.291225 containerd[1577]: time="2024-11-12T20:56:50.291172197Z" level=info 
msg="CreateContainer within sandbox \"6ff60ae9db3aae39be97d82a501737bc5eb3df22b4f41231229de913258c032c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4fe7691769d3263710397782c690d6599594d54286a543184c60b07cf76f5c17\"" Nov 12 20:56:50.292161 containerd[1577]: time="2024-11-12T20:56:50.291807992Z" level=info msg="StartContainer for \"4fe7691769d3263710397782c690d6599594d54286a543184c60b07cf76f5c17\"" Nov 12 20:56:50.364346 containerd[1577]: time="2024-11-12T20:56:50.364292205Z" level=info msg="StartContainer for \"4fe7691769d3263710397782c690d6599594d54286a543184c60b07cf76f5c17\" returns successfully" Nov 12 20:56:50.465019 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:56:50.465213 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 12 20:56:51.094936 kubelet[2780]: I1112 20:56:51.092585 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-mzdpn" podStartSLOduration=2.213139318 podStartE2EDuration="19.092521238s" podCreationTimestamp="2024-11-12 20:56:32 +0000 UTC" firstStartedPulling="2024-11-12 20:56:33.363863042 +0000 UTC m=+22.754543389" lastFinishedPulling="2024-11-12 20:56:50.243244963 +0000 UTC m=+39.633925309" observedRunningTime="2024-11-12 20:56:51.084218632 +0000 UTC m=+40.474898989" watchObservedRunningTime="2024-11-12 20:56:51.092521238 +0000 UTC m=+40.483201595" Nov 12 20:56:52.312148 kernel: bpftool[4018]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:56:52.589970 systemd-networkd[1222]: vxlan.calico: Link UP Nov 12 20:56:52.589982 systemd-networkd[1222]: vxlan.calico: Gained carrier Nov 12 20:56:54.518594 systemd-networkd[1222]: vxlan.calico: Gained IPv6LL Nov 12 20:56:54.794607 containerd[1577]: time="2024-11-12T20:56:54.793207525Z" level=info msg="StopPodSandbox for \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\"" Nov 12 20:56:54.895980 
containerd[1577]: 2024-11-12 20:56:54.851 [INFO][4104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.853 [INFO][4104] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" iface="eth0" netns="/var/run/netns/cni-a9ed1a79-377f-904f-7cb5-524742451edb" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.854 [INFO][4104] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" iface="eth0" netns="/var/run/netns/cni-a9ed1a79-377f-904f-7cb5-524742451edb" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.854 [INFO][4104] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" iface="eth0" netns="/var/run/netns/cni-a9ed1a79-377f-904f-7cb5-524742451edb" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.855 [INFO][4104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.855 [INFO][4104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.881 [INFO][4110] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:54.895980 containerd[1577]: 
2024-11-12 20:56:54.881 [INFO][4110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.881 [INFO][4110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.890 [WARNING][4110] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.890 [INFO][4110] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.892 [INFO][4110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:54.895980 containerd[1577]: 2024-11-12 20:56:54.894 [INFO][4104] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:56:54.899477 containerd[1577]: time="2024-11-12T20:56:54.896206950Z" level=info msg="TearDown network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\" successfully" Nov 12 20:56:54.899477 containerd[1577]: time="2024-11-12T20:56:54.896245816Z" level=info msg="StopPodSandbox for \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\" returns successfully" Nov 12 20:56:54.899477 containerd[1577]: time="2024-11-12T20:56:54.897150633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5d9c55f-qv98l,Uid:1f661fac-f550-4093-a121-8425e9897475,Namespace:calico-system,Attempt:1,}" Nov 12 20:56:54.904252 systemd[1]: run-netns-cni\x2da9ed1a79\x2d377f\x2d904f\x2d7cb5\x2d524742451edb.mount: Deactivated successfully. Nov 12 20:56:55.052447 systemd-networkd[1222]: cali21840cf311a: Link UP Nov 12 20:56:55.055343 systemd-networkd[1222]: cali21840cf311a: Gained carrier Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:54.967 [INFO][4117] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0 calico-kube-controllers-54d5d9c55f- calico-system 1f661fac-f550-4093-a121-8425e9897475 746 0 2024-11-12 20:56:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54d5d9c55f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal calico-kube-controllers-54d5d9c55f-qv98l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali21840cf311a [] []}} 
ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:54.967 [INFO][4117] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.000 [INFO][4127] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" HandleID="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.012 [INFO][4127] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" HandleID="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", "pod":"calico-kube-controllers-54d5d9c55f-qv98l", "timestamp":"2024-11-12 20:56:55.000665645 +0000 UTC"}, Hostname:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.013 [INFO][4127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.013 [INFO][4127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.013 [INFO][4127] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal' Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.015 [INFO][4127] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.019 [INFO][4127] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.023 [INFO][4127] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.025 [INFO][4127] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.028 [INFO][4127] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.028 [INFO][4127] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 
handle="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.030 [INFO][4127] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73 Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.035 [INFO][4127] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.042 [INFO][4127] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.1/26] block=192.168.121.0/26 handle="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.042 [INFO][4127] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.1/26] handle="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.042 [INFO][4127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:55.079994 containerd[1577]: 2024-11-12 20:56:55.043 [INFO][4127] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.1/26] IPv6=[] ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" HandleID="k8s-pod-network.748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:55.082881 containerd[1577]: 2024-11-12 20:56:55.045 [INFO][4117] cni-plugin/k8s.go 386: Populated endpoint ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0", GenerateName:"calico-kube-controllers-54d5d9c55f-", Namespace:"calico-system", SelfLink:"", UID:"1f661fac-f550-4093-a121-8425e9897475", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d5d9c55f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-54d5d9c55f-qv98l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21840cf311a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:55.082881 containerd[1577]: 2024-11-12 20:56:55.045 [INFO][4117] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.1/32] ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:55.082881 containerd[1577]: 2024-11-12 20:56:55.045 [INFO][4117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21840cf311a ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:55.082881 containerd[1577]: 2024-11-12 20:56:55.053 [INFO][4117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:55.082881 containerd[1577]: 2024-11-12 20:56:55.053 [INFO][4117] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0", GenerateName:"calico-kube-controllers-54d5d9c55f-", Namespace:"calico-system", SelfLink:"", UID:"1f661fac-f550-4093-a121-8425e9897475", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d5d9c55f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73", Pod:"calico-kube-controllers-54d5d9c55f-qv98l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21840cf311a", MAC:"d6:a5:8a:ed:5c:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:55.082881 containerd[1577]: 
2024-11-12 20:56:55.070 [INFO][4117] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73" Namespace="calico-system" Pod="calico-kube-controllers-54d5d9c55f-qv98l" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:56:55.115501 containerd[1577]: time="2024-11-12T20:56:55.115371780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:55.115501 containerd[1577]: time="2024-11-12T20:56:55.115431276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:55.115501 containerd[1577]: time="2024-11-12T20:56:55.115449687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:55.115955 containerd[1577]: time="2024-11-12T20:56:55.115578475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:55.199993 containerd[1577]: time="2024-11-12T20:56:55.199939205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d5d9c55f-qv98l,Uid:1f661fac-f550-4093-a121-8425e9897475,Namespace:calico-system,Attempt:1,} returns sandbox id \"748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73\"" Nov 12 20:56:55.202614 containerd[1577]: time="2024-11-12T20:56:55.202574960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:56:55.791959 containerd[1577]: time="2024-11-12T20:56:55.791580140Z" level=info msg="StopPodSandbox for \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\"" Nov 12 20:56:55.793247 containerd[1577]: time="2024-11-12T20:56:55.792316650Z" level=info msg="StopPodSandbox for \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\"" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.876 [INFO][4218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.877 [INFO][4218] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" iface="eth0" netns="/var/run/netns/cni-60d1c138-a62f-65e3-abb3-73d049614eaf" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.878 [INFO][4218] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" iface="eth0" netns="/var/run/netns/cni-60d1c138-a62f-65e3-abb3-73d049614eaf" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.879 [INFO][4218] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" iface="eth0" netns="/var/run/netns/cni-60d1c138-a62f-65e3-abb3-73d049614eaf" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.879 [INFO][4218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.879 [INFO][4218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.919 [INFO][4226] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.919 [INFO][4226] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.919 [INFO][4226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.929 [WARNING][4226] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.929 [INFO][4226] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.931 [INFO][4226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:55.938079 containerd[1577]: 2024-11-12 20:56:55.933 [INFO][4218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:56:55.939422 containerd[1577]: time="2024-11-12T20:56:55.938149760Z" level=info msg="TearDown network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\" successfully" Nov 12 20:56:55.939422 containerd[1577]: time="2024-11-12T20:56:55.938227911Z" level=info msg="StopPodSandbox for \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\" returns successfully" Nov 12 20:56:55.946214 containerd[1577]: time="2024-11-12T20:56:55.945565995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7pp7,Uid:df2bc72d-5575-407e-ae43-315c296f87af,Namespace:kube-system,Attempt:1,}" Nov 12 20:56:55.946518 systemd[1]: run-netns-cni\x2d60d1c138\x2da62f\x2d65e3\x2dabb3\x2d73d049614eaf.mount: Deactivated successfully. 
Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.874 [INFO][4207] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.876 [INFO][4207] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" iface="eth0" netns="/var/run/netns/cni-ccf8fb0a-ccdd-b63f-8da0-4a48e4c311e4" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.878 [INFO][4207] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" iface="eth0" netns="/var/run/netns/cni-ccf8fb0a-ccdd-b63f-8da0-4a48e4c311e4" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.879 [INFO][4207] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" iface="eth0" netns="/var/run/netns/cni-ccf8fb0a-ccdd-b63f-8da0-4a48e4c311e4" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.879 [INFO][4207] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.879 [INFO][4207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.920 [INFO][4227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:55.952654 
containerd[1577]: 2024-11-12 20:56:55.921 [INFO][4227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.931 [INFO][4227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.943 [WARNING][4227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.943 [INFO][4227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.947 [INFO][4227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:55.952654 containerd[1577]: 2024-11-12 20:56:55.950 [INFO][4207] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:56:55.957008 containerd[1577]: time="2024-11-12T20:56:55.952742059Z" level=info msg="TearDown network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\" successfully" Nov 12 20:56:55.957008 containerd[1577]: time="2024-11-12T20:56:55.952773039Z" level=info msg="StopPodSandbox for \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\" returns successfully" Nov 12 20:56:55.957008 containerd[1577]: time="2024-11-12T20:56:55.953472839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-pz72g,Uid:2176e7e1-d94c-479d-92e3-e9f80e8d0f4d,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:56:55.958383 systemd[1]: run-netns-cni\x2dccf8fb0a\x2dccdd\x2db63f\x2d8da0\x2d4a48e4c311e4.mount: Deactivated successfully. Nov 12 20:56:56.182545 systemd-networkd[1222]: cali21840cf311a: Gained IPv6LL Nov 12 20:56:56.295220 systemd-networkd[1222]: cali6d49ba99624: Link UP Nov 12 20:56:56.297817 systemd-networkd[1222]: cali6d49ba99624: Gained carrier Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.100 [INFO][4238] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0 coredns-76f75df574- kube-system df2bc72d-5575-407e-ae43-315c296f87af 755 0 2024-11-12 20:56:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal coredns-76f75df574-s7pp7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6d49ba99624 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" 
Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.101 [INFO][4238] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.198 [INFO][4261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" HandleID="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.216 [INFO][4261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" HandleID="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000493750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", "pod":"coredns-76f75df574-s7pp7", "timestamp":"2024-11-12 20:56:56.198807686 +0000 UTC"}, Hostname:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:56.328370 containerd[1577]: 
2024-11-12 20:56:56.216 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.216 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.217 [INFO][4261] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal' Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.220 [INFO][4261] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.226 [INFO][4261] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.233 [INFO][4261] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.237 [INFO][4261] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.240 [INFO][4261] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.240 [INFO][4261] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.243 [INFO][4261] ipam/ipam.go 1685: Creating new 
handle: k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.255 [INFO][4261] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.268 [INFO][4261] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.2/26] block=192.168.121.0/26 handle="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.268 [INFO][4261] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.2/26] handle="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.268 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:56.328370 containerd[1577]: 2024-11-12 20:56:56.269 [INFO][4261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.2/26] IPv6=[] ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" HandleID="k8s-pod-network.9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:56.332017 containerd[1577]: 2024-11-12 20:56:56.278 [INFO][4238] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"df2bc72d-5575-407e-ae43-315c296f87af", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-s7pp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d49ba99624", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:56.332017 containerd[1577]: 2024-11-12 20:56:56.280 [INFO][4238] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.2/32] ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:56.332017 containerd[1577]: 2024-11-12 20:56:56.280 [INFO][4238] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d49ba99624 ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:56.332017 containerd[1577]: 2024-11-12 20:56:56.294 [INFO][4238] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:56.332017 containerd[1577]: 2024-11-12 20:56:56.295 [INFO][4238] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"df2bc72d-5575-407e-ae43-315c296f87af", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d", Pod:"coredns-76f75df574-s7pp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d49ba99624", MAC:"fe:04:39:ff:f9:78", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:56.332017 containerd[1577]: 2024-11-12 20:56:56.319 [INFO][4238] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d" Namespace="kube-system" Pod="coredns-76f75df574-s7pp7" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:56:56.364806 systemd-networkd[1222]: calic1d581b10c7: Link UP Nov 12 20:56:56.369202 systemd-networkd[1222]: calic1d581b10c7: Gained carrier Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.141 [INFO][4247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0 calico-apiserver-76cbf9b5bf- calico-apiserver 2176e7e1-d94c-479d-92e3-e9f80e8d0f4d 754 0 2024-11-12 20:56:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76cbf9b5bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal calico-apiserver-76cbf9b5bf-pz72g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic1d581b10c7 [] []}} ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.142 [INFO][4247] cni-plugin/k8s.go 77: 
Extracted identifiers for CmdAddK8s ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.247 [INFO][4265] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" HandleID="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.270 [INFO][4265] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" HandleID="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025c0c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", "pod":"calico-apiserver-76cbf9b5bf-pz72g", "timestamp":"2024-11-12 20:56:56.247379935 +0000 UTC"}, Hostname:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.270 [INFO][4265] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.270 [INFO][4265] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.272 [INFO][4265] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal' Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.276 [INFO][4265] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.291 [INFO][4265] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.304 [INFO][4265] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.308 [INFO][4265] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.316 [INFO][4265] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.316 [INFO][4265] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.325 [INFO][4265] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642 Nov 12 
20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.340 [INFO][4265] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.354 [INFO][4265] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.3/26] block=192.168.121.0/26 handle="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.354 [INFO][4265] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.3/26] handle="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.354 [INFO][4265] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:56.407388 containerd[1577]: 2024-11-12 20:56:56.354 [INFO][4265] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.3/26] IPv6=[] ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" HandleID="k8s-pod-network.c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:56.409626 containerd[1577]: 2024-11-12 20:56:56.358 [INFO][4247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-76cbf9b5bf-pz72g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1d581b10c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:56.409626 containerd[1577]: 2024-11-12 20:56:56.358 [INFO][4247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.3/32] ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:56.409626 containerd[1577]: 2024-11-12 20:56:56.358 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic1d581b10c7 ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:56.409626 containerd[1577]: 2024-11-12 20:56:56.372 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:56.409626 containerd[1577]: 2024-11-12 20:56:56.372 [INFO][4247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642", Pod:"calico-apiserver-76cbf9b5bf-pz72g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1d581b10c7", MAC:"c2:4d:ee:d4:e2:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:56.409626 containerd[1577]: 2024-11-12 20:56:56.395 [INFO][4247] 
cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-pz72g" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:56:56.444134 containerd[1577]: time="2024-11-12T20:56:56.442840650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:56.444134 containerd[1577]: time="2024-11-12T20:56:56.443964553Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:56.444401 containerd[1577]: time="2024-11-12T20:56:56.443987372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:56.445014 containerd[1577]: time="2024-11-12T20:56:56.444859351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:56.539278 containerd[1577]: time="2024-11-12T20:56:56.539211999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s7pp7,Uid:df2bc72d-5575-407e-ae43-315c296f87af,Namespace:kube-system,Attempt:1,} returns sandbox id \"9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d\"" Nov 12 20:56:56.543753 containerd[1577]: time="2024-11-12T20:56:56.542584121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:56.543753 containerd[1577]: time="2024-11-12T20:56:56.542655073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:56.543753 containerd[1577]: time="2024-11-12T20:56:56.542683606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:56.544798 containerd[1577]: time="2024-11-12T20:56:56.544253641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:56.550943 containerd[1577]: time="2024-11-12T20:56:56.550908798Z" level=info msg="CreateContainer within sandbox \"9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:56.585474 containerd[1577]: time="2024-11-12T20:56:56.585133991Z" level=info msg="CreateContainer within sandbox \"9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a4e6f4931cf92d22e66d024dded7a9470b634c32ff636b810398dcfbd12ba68a\"" Nov 12 20:56:56.588940 containerd[1577]: time="2024-11-12T20:56:56.588889158Z" level=info msg="StartContainer for \"a4e6f4931cf92d22e66d024dded7a9470b634c32ff636b810398dcfbd12ba68a\"" Nov 12 20:56:56.707633 containerd[1577]: time="2024-11-12T20:56:56.706879875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-pz72g,Uid:2176e7e1-d94c-479d-92e3-e9f80e8d0f4d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642\"" Nov 12 20:56:56.739166 containerd[1577]: time="2024-11-12T20:56:56.737588012Z" level=info msg="StartContainer for \"a4e6f4931cf92d22e66d024dded7a9470b634c32ff636b810398dcfbd12ba68a\" returns successfully" Nov 12 20:56:56.798113 containerd[1577]: time="2024-11-12T20:56:56.795338929Z" level=info msg="StopPodSandbox for \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\"" Nov 12 20:56:56.983805 
containerd[1577]: 2024-11-12 20:56:56.899 [INFO][4441] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.899 [INFO][4441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" iface="eth0" netns="/var/run/netns/cni-8e38c0b2-16dc-bacb-a8af-5c5dca08843b" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.900 [INFO][4441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" iface="eth0" netns="/var/run/netns/cni-8e38c0b2-16dc-bacb-a8af-5c5dca08843b" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.900 [INFO][4441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" iface="eth0" netns="/var/run/netns/cni-8e38c0b2-16dc-bacb-a8af-5c5dca08843b" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.900 [INFO][4441] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.900 [INFO][4441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.957 [INFO][4447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 
20:56:56.958 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.958 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.971 [WARNING][4447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.972 [INFO][4447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.975 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:56:56.983805 containerd[1577]: 2024-11-12 20:56:56.979 [INFO][4441] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:56:56.983805 containerd[1577]: time="2024-11-12T20:56:56.982666653Z" level=info msg="TearDown network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\" successfully" Nov 12 20:56:56.983805 containerd[1577]: time="2024-11-12T20:56:56.982703977Z" level=info msg="StopPodSandbox for \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\" returns successfully" Nov 12 20:56:56.989885 containerd[1577]: time="2024-11-12T20:56:56.984328312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-sf428,Uid:43370d02-66a8-4ab1-8864-281286226360,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:56:56.990815 systemd[1]: run-netns-cni\x2d8e38c0b2\x2d16dc\x2dbacb\x2da8af\x2d5c5dca08843b.mount: Deactivated successfully. Nov 12 20:56:57.131549 kubelet[2780]: I1112 20:56:57.131169 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s7pp7" podStartSLOduration=33.130887645 podStartE2EDuration="33.130887645s" podCreationTimestamp="2024-11-12 20:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:57.106350864 +0000 UTC m=+46.497031221" watchObservedRunningTime="2024-11-12 20:56:57.130887645 +0000 UTC m=+46.521567982" Nov 12 20:56:57.295383 systemd-networkd[1222]: cali513f4a92c06: Link UP Nov 12 20:56:57.297658 systemd-networkd[1222]: cali513f4a92c06: Gained carrier Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.099 [INFO][4454] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0 calico-apiserver-76cbf9b5bf- calico-apiserver 43370d02-66a8-4ab1-8864-281286226360 769 0 2024-11-12 20:56:32 +0000 
UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76cbf9b5bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal calico-apiserver-76cbf9b5bf-sf428 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali513f4a92c06 [] []}} ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.099 [INFO][4454] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.224 [INFO][4465] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" HandleID="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.239 [INFO][4465] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" HandleID="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" 
Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000305290), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", "pod":"calico-apiserver-76cbf9b5bf-sf428", "timestamp":"2024-11-12 20:56:57.224362625 +0000 UTC"}, Hostname:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.239 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.239 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.239 [INFO][4465] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal' Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.242 [INFO][4465] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.248 [INFO][4465] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.256 [INFO][4465] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.260 [INFO][4465] ipam/ipam.go 155: Attempting to load block 
cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.263 [INFO][4465] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.263 [INFO][4465] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.265 [INFO][4465] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.272 [INFO][4465] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.285 [INFO][4465] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.4/26] block=192.168.121.0/26 handle="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.285 [INFO][4465] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.4/26] handle="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal" Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.285 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:56:57.324729 containerd[1577]: 2024-11-12 20:56:57.285 [INFO][4465] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.4/26] IPv6=[] ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" HandleID="k8s-pod-network.c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0"
Nov 12 20:56:57.326037 containerd[1577]: 2024-11-12 20:56:57.290 [INFO][4454] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"43370d02-66a8-4ab1-8864-281286226360", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-76cbf9b5bf-sf428", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali513f4a92c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:56:57.326037 containerd[1577]: 2024-11-12 20:56:57.290 [INFO][4454] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.4/32] ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0"
Nov 12 20:56:57.326037 containerd[1577]: 2024-11-12 20:56:57.290 [INFO][4454] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali513f4a92c06 ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0"
Nov 12 20:56:57.326037 containerd[1577]: 2024-11-12 20:56:57.298 [INFO][4454] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0"
Nov 12 20:56:57.326037 containerd[1577]: 2024-11-12 20:56:57.301 [INFO][4454] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"43370d02-66a8-4ab1-8864-281286226360", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b", Pod:"calico-apiserver-76cbf9b5bf-sf428", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali513f4a92c06", MAC:"ca:f0:70:cd:9a:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:56:57.326037 containerd[1577]: 2024-11-12 20:56:57.320 [INFO][4454] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b" Namespace="calico-apiserver" Pod="calico-apiserver-76cbf9b5bf-sf428" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0"
Nov 12 20:56:57.383220 containerd[1577]: time="2024-11-12T20:56:57.382998661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:56:57.383920 containerd[1577]: time="2024-11-12T20:56:57.383264938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:56:57.385041 containerd[1577]: time="2024-11-12T20:56:57.384571041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:57.385452 containerd[1577]: time="2024-11-12T20:56:57.384978354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:57.514655 containerd[1577]: time="2024-11-12T20:56:57.514567048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76cbf9b5bf-sf428,Uid:43370d02-66a8-4ab1-8864-281286226360,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b\""
Nov 12 20:56:57.783317 systemd-networkd[1222]: calic1d581b10c7: Gained IPv6LL
Nov 12 20:56:57.794279 containerd[1577]: time="2024-11-12T20:56:57.794152996Z" level=info msg="StopPodSandbox for \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\""
Nov 12 20:56:57.796342 containerd[1577]: time="2024-11-12T20:56:57.794941376Z" level=info msg="StopPodSandbox for \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\""
Nov 12 20:56:58.038793 containerd[1577]: time="2024-11-12T20:56:58.038647131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:58.042034 containerd[1577]: time="2024-11-12T20:56:58.041976005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461"
Nov 12 20:56:58.044159 containerd[1577]: time="2024-11-12T20:56:58.043304663Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:58.049013 containerd[1577]: time="2024-11-12T20:56:58.048972386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:58.052454 containerd[1577]: time="2024-11-12T20:56:58.051810107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.849183475s"
Nov 12 20:56:58.052741 containerd[1577]: time="2024-11-12T20:56:58.052712211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\""
Nov 12 20:56:58.058295 containerd[1577]: time="2024-11-12T20:56:58.057855861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\""
Nov 12 20:56:58.093228 containerd[1577]: time="2024-11-12T20:56:58.093186427Z" level=info msg="CreateContainer within sandbox \"748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:57.977 [INFO][4560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:57.977 [INFO][4560] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" iface="eth0" netns="/var/run/netns/cni-0195caeb-2450-9ef1-1eb0-6ae6550f0dfd"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:57.977 [INFO][4560] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" iface="eth0" netns="/var/run/netns/cni-0195caeb-2450-9ef1-1eb0-6ae6550f0dfd"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:57.981 [INFO][4560] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" iface="eth0" netns="/var/run/netns/cni-0195caeb-2450-9ef1-1eb0-6ae6550f0dfd"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:57.981 [INFO][4560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:57.982 [INFO][4560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:58.063 [INFO][4574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:58.071 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:58.071 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:58.101 [WARNING][4574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:58.101 [INFO][4574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:58.104 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:56:58.108069 containerd[1577]: 2024-11-12 20:56:58.105 [INFO][4560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5"
Nov 12 20:56:58.113693 containerd[1577]: time="2024-11-12T20:56:58.113167304Z" level=info msg="TearDown network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\" successfully"
Nov 12 20:56:58.113693 containerd[1577]: time="2024-11-12T20:56:58.113203085Z" level=info msg="StopPodSandbox for \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\" returns successfully"
Nov 12 20:56:58.119138 containerd[1577]: time="2024-11-12T20:56:58.118159202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r8k87,Uid:017708f0-c5c9-4372-bddb-a2a7a49fd2e0,Namespace:kube-system,Attempt:1,}"
Nov 12 20:56:58.122828 systemd[1]: run-netns-cni\x2d0195caeb\x2d2450\x2d9ef1\x2d1eb0\x2d6ae6550f0dfd.mount: Deactivated successfully.
Nov 12 20:56:58.137844 containerd[1577]: time="2024-11-12T20:56:58.137116722Z" level=info msg="CreateContainer within sandbox \"748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fea9c18a28ec4ffc5b396ec346e8fde888037c650fc1f01ad89cdd00e0b8f962\""
Nov 12 20:56:58.138872 containerd[1577]: time="2024-11-12T20:56:58.138526574Z" level=info msg="StartContainer for \"fea9c18a28ec4ffc5b396ec346e8fde888037c650fc1f01ad89cdd00e0b8f962\""
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:57.964 [INFO][4553] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:57.964 [INFO][4553] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" iface="eth0" netns="/var/run/netns/cni-6ff0cec8-4f5c-2883-a313-848f84988476"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:57.965 [INFO][4553] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" iface="eth0" netns="/var/run/netns/cni-6ff0cec8-4f5c-2883-a313-848f84988476"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:57.967 [INFO][4553] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" iface="eth0" netns="/var/run/netns/cni-6ff0cec8-4f5c-2883-a313-848f84988476"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:57.967 [INFO][4553] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:57.967 [INFO][4553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:58.099 [INFO][4570] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:58.099 [INFO][4570] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:58.104 [INFO][4570] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:58.115 [WARNING][4570] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:58.116 [INFO][4570] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0"
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:58.119 [INFO][4570] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:56:58.140486 containerd[1577]: 2024-11-12 20:56:58.132 [INFO][4553] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0"
Nov 12 20:56:58.142145 containerd[1577]: time="2024-11-12T20:56:58.141390348Z" level=info msg="TearDown network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\" successfully"
Nov 12 20:56:58.142145 containerd[1577]: time="2024-11-12T20:56:58.141418263Z" level=info msg="StopPodSandbox for \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\" returns successfully"
Nov 12 20:56:58.142145 containerd[1577]: time="2024-11-12T20:56:58.142048844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r88,Uid:1ebb665c-7489-46df-9cad-fdce94e5d49a,Namespace:calico-system,Attempt:1,}"
Nov 12 20:56:58.168048 systemd-networkd[1222]: cali6d49ba99624: Gained IPv6LL
Nov 12 20:56:58.350816 containerd[1577]: time="2024-11-12T20:56:58.346545040Z" level=info msg="StartContainer for \"fea9c18a28ec4ffc5b396ec346e8fde888037c650fc1f01ad89cdd00e0b8f962\" returns successfully"
Nov 12 20:56:58.427701 systemd-networkd[1222]: cali793b0fed337: Link UP
Nov 12 20:56:58.430814 systemd-networkd[1222]: cali793b0fed337: Gained carrier
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.253 [INFO][4598] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0 coredns-76f75df574- kube-system 017708f0-c5c9-4372-bddb-a2a7a49fd2e0 786 0 2024-11-12 20:56:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal coredns-76f75df574-r8k87 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali793b0fed337 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.253 [INFO][4598] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.335 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" HandleID="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.368 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" HandleID="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050db0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", "pod":"coredns-76f75df574-r8k87", "timestamp":"2024-11-12 20:56:58.335905808 +0000 UTC"}, Hostname:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.368 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.368 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.368 [INFO][4637] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal'
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.373 [INFO][4637] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.381 [INFO][4637] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.388 [INFO][4637] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.391 [INFO][4637] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.395 [INFO][4637] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.395 [INFO][4637] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.398 [INFO][4637] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.404 [INFO][4637] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.413 [INFO][4637] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.5/26] block=192.168.121.0/26 handle="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.413 [INFO][4637] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.5/26] handle="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.413 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:56:58.472777 containerd[1577]: 2024-11-12 20:56:58.413 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.5/26] IPv6=[] ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" HandleID="k8s-pod-network.ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.474930 containerd[1577]: 2024-11-12 20:56:58.417 [INFO][4598] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"017708f0-c5c9-4372-bddb-a2a7a49fd2e0", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-76f75df574-r8k87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali793b0fed337", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:56:58.474930 containerd[1577]: 2024-11-12 20:56:58.417 [INFO][4598] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.5/32] ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.474930 containerd[1577]: 2024-11-12 20:56:58.417 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali793b0fed337 ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.474930 containerd[1577]: 2024-11-12 20:56:58.433 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.474930 containerd[1577]: 2024-11-12 20:56:58.439 [INFO][4598] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"017708f0-c5c9-4372-bddb-a2a7a49fd2e0", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1", Pod:"coredns-76f75df574-r8k87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali793b0fed337", MAC:"46:45:42:88:cf:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:56:58.474930 containerd[1577]: 2024-11-12 20:56:58.461 [INFO][4598] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1" Namespace="kube-system" Pod="coredns-76f75df574-r8k87" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0"
Nov 12 20:56:58.548082 systemd-networkd[1222]: calia725673251e: Link UP
Nov 12 20:56:58.548455 systemd-networkd[1222]: calia725673251e: Gained carrier
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.283 [INFO][4614] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0 csi-node-driver- calico-system 1ebb665c-7489-46df-9cad-fdce94e5d49a 785 0 2024-11-12 20:56:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal csi-node-driver-54r88 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia725673251e [] []}} ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.283 [INFO][4614] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.378 [INFO][4641] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" HandleID="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.399 [INFO][4641] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" HandleID="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bc860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", "pod":"csi-node-driver-54r88", "timestamp":"2024-11-12 20:56:58.378920617 +0000 UTC"}, Hostname:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.399 [INFO][4641] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.414 [INFO][4641] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.414 [INFO][4641] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal'
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.416 [INFO][4641] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.430 [INFO][4641] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.458 [INFO][4641] ipam/ipam.go 489: Trying affinity for 192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.472 [INFO][4641] ipam/ipam.go 155: Attempting to load block cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.481 [INFO][4641] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.121.0/26 host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.482 [INFO][4641] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.121.0/26 handle="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.484 [INFO][4641] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.499 [INFO][4641] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.121.0/26 handle="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.511 [INFO][4641] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.121.6/26] block=192.168.121.0/26 handle="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.511 [INFO][4641] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.121.6/26] handle="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" host="ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal"
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.511 [INFO][4641] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:56:58.588239 containerd[1577]: 2024-11-12 20:56:58.511 [INFO][4641] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.121.6/26] IPv6=[] ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" HandleID="k8s-pod-network.707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0"
Nov 12 20:56:58.591638 containerd[1577]: 2024-11-12 20:56:58.520 [INFO][4614] cni-plugin/k8s.go 386: Populated endpoint ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0",
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ebb665c-7489-46df-9cad-fdce94e5d49a", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-54r88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia725673251e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:58.591638 containerd[1577]: 2024-11-12 20:56:58.523 [INFO][4614] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.121.6/32] ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:56:58.591638 containerd[1577]: 2024-11-12 20:56:58.525 [INFO][4614] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia725673251e ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" 
WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:56:58.591638 containerd[1577]: 2024-11-12 20:56:58.541 [INFO][4614] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:56:58.591638 containerd[1577]: 2024-11-12 20:56:58.544 [INFO][4614] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ebb665c-7489-46df-9cad-fdce94e5d49a", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5", Pod:"csi-node-driver-54r88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia725673251e", MAC:"fe:0e:d7:20:b4:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:56:58.591638 containerd[1577]: 2024-11-12 20:56:58.571 [INFO][4614] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5" Namespace="calico-system" Pod="csi-node-driver-54r88" WorkloadEndpoint="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:56:58.599016 containerd[1577]: time="2024-11-12T20:56:58.598711117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:58.599016 containerd[1577]: time="2024-11-12T20:56:58.598803079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:58.599016 containerd[1577]: time="2024-11-12T20:56:58.598824352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:58.599409 containerd[1577]: time="2024-11-12T20:56:58.598960181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:58.656922 containerd[1577]: time="2024-11-12T20:56:58.655939727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:58.659477 containerd[1577]: time="2024-11-12T20:56:58.658637153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:58.659477 containerd[1577]: time="2024-11-12T20:56:58.658697324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:58.659477 containerd[1577]: time="2024-11-12T20:56:58.659131263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:58.715651 containerd[1577]: time="2024-11-12T20:56:58.715588782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r8k87,Uid:017708f0-c5c9-4372-bddb-a2a7a49fd2e0,Namespace:kube-system,Attempt:1,} returns sandbox id \"ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1\"" Nov 12 20:56:58.729858 containerd[1577]: time="2024-11-12T20:56:58.729645752Z" level=info msg="CreateContainer within sandbox \"ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:56:58.742263 systemd-networkd[1222]: cali513f4a92c06: Gained IPv6LL Nov 12 20:56:58.752142 containerd[1577]: time="2024-11-12T20:56:58.751521083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54r88,Uid:1ebb665c-7489-46df-9cad-fdce94e5d49a,Namespace:calico-system,Attempt:1,} returns sandbox id \"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5\"" Nov 12 20:56:58.754937 containerd[1577]: time="2024-11-12T20:56:58.754846593Z" level=info msg="CreateContainer within sandbox \"ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cc7faf8efbd6b617b59b9395b79b80f19c0988355eeed0d2a4f1c611ab1d1b9\"" 
Nov 12 20:56:58.755495 containerd[1577]: time="2024-11-12T20:56:58.755406370Z" level=info msg="StartContainer for \"2cc7faf8efbd6b617b59b9395b79b80f19c0988355eeed0d2a4f1c611ab1d1b9\"" Nov 12 20:56:58.828288 containerd[1577]: time="2024-11-12T20:56:58.828199114Z" level=info msg="StartContainer for \"2cc7faf8efbd6b617b59b9395b79b80f19c0988355eeed0d2a4f1c611ab1d1b9\" returns successfully" Nov 12 20:56:58.980055 systemd[1]: run-netns-cni\x2d6ff0cec8\x2d4f5c\x2d2883\x2da313\x2d848f84988476.mount: Deactivated successfully. Nov 12 20:56:59.137049 kubelet[2780]: I1112 20:56:59.136322 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-r8k87" podStartSLOduration=35.135478598 podStartE2EDuration="35.135478598s" podCreationTimestamp="2024-11-12 20:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:59.130809926 +0000 UTC m=+48.521490282" watchObservedRunningTime="2024-11-12 20:56:59.135478598 +0000 UTC m=+48.526159010" Nov 12 20:56:59.164960 kubelet[2780]: I1112 20:56:59.162707 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54d5d9c55f-qv98l" podStartSLOduration=24.310406123 podStartE2EDuration="27.162646298s" podCreationTimestamp="2024-11-12 20:56:32 +0000 UTC" firstStartedPulling="2024-11-12 20:56:55.201648085 +0000 UTC m=+44.592328427" lastFinishedPulling="2024-11-12 20:56:58.053888254 +0000 UTC m=+47.444568602" observedRunningTime="2024-11-12 20:56:59.15996868 +0000 UTC m=+48.550649038" watchObservedRunningTime="2024-11-12 20:56:59.162646298 +0000 UTC m=+48.553326657" Nov 12 20:56:59.638900 systemd-networkd[1222]: calia725673251e: Gained IPv6LL Nov 12 20:57:00.150972 systemd-networkd[1222]: cali793b0fed337: Gained IPv6LL Nov 12 20:57:00.314638 containerd[1577]: time="2024-11-12T20:57:00.314550844Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:00.316077 containerd[1577]: time="2024-11-12T20:57:00.315968699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:57:00.317538 containerd[1577]: time="2024-11-12T20:57:00.317460754Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:00.320660 containerd[1577]: time="2024-11-12T20:57:00.320621736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:00.322460 containerd[1577]: time="2024-11-12T20:57:00.321637418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.263730697s" Nov 12 20:57:00.322460 containerd[1577]: time="2024-11-12T20:57:00.321684137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:57:00.324516 containerd[1577]: time="2024-11-12T20:57:00.323167451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:57:00.324648 containerd[1577]: time="2024-11-12T20:57:00.324253052Z" level=info msg="CreateContainer within sandbox \"c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:57:00.348235 
containerd[1577]: time="2024-11-12T20:57:00.348181402Z" level=info msg="CreateContainer within sandbox \"c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09f0592b3cd9898e9628596f67d16c734e3f9fbd0c7ca74911c8f65c050564cf\"" Nov 12 20:57:00.350445 containerd[1577]: time="2024-11-12T20:57:00.348885154Z" level=info msg="StartContainer for \"09f0592b3cd9898e9628596f67d16c734e3f9fbd0c7ca74911c8f65c050564cf\"" Nov 12 20:57:00.455856 containerd[1577]: time="2024-11-12T20:57:00.455699496Z" level=info msg="StartContainer for \"09f0592b3cd9898e9628596f67d16c734e3f9fbd0c7ca74911c8f65c050564cf\" returns successfully" Nov 12 20:57:00.529724 containerd[1577]: time="2024-11-12T20:57:00.528511157Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:00.531771 containerd[1577]: time="2024-11-12T20:57:00.531718589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:57:00.534864 containerd[1577]: time="2024-11-12T20:57:00.534804913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 210.247653ms" Nov 12 20:57:00.535032 containerd[1577]: time="2024-11-12T20:57:00.535011375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:57:00.537535 containerd[1577]: time="2024-11-12T20:57:00.537498634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:57:00.541780 
containerd[1577]: time="2024-11-12T20:57:00.541342478Z" level=info msg="CreateContainer within sandbox \"c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:57:00.559056 containerd[1577]: time="2024-11-12T20:57:00.559014687Z" level=info msg="CreateContainer within sandbox \"c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5ee2bbdbfd782a1d06fc8e25035a2b2ccd02d7966cba2dfb36fd794dba1d5e84\"" Nov 12 20:57:00.561825 containerd[1577]: time="2024-11-12T20:57:00.560972904Z" level=info msg="StartContainer for \"5ee2bbdbfd782a1d06fc8e25035a2b2ccd02d7966cba2dfb36fd794dba1d5e84\"" Nov 12 20:57:00.698517 containerd[1577]: time="2024-11-12T20:57:00.698231205Z" level=info msg="StartContainer for \"5ee2bbdbfd782a1d06fc8e25035a2b2ccd02d7966cba2dfb36fd794dba1d5e84\" returns successfully" Nov 12 20:57:01.239581 kubelet[2780]: I1112 20:57:01.235562 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-pz72g" podStartSLOduration=25.624441689 podStartE2EDuration="29.235500714s" podCreationTimestamp="2024-11-12 20:56:32 +0000 UTC" firstStartedPulling="2024-11-12 20:56:56.711060402 +0000 UTC m=+46.101740745" lastFinishedPulling="2024-11-12 20:57:00.322119426 +0000 UTC m=+49.712799770" observedRunningTime="2024-11-12 20:57:01.233427414 +0000 UTC m=+50.624107770" watchObservedRunningTime="2024-11-12 20:57:01.235500714 +0000 UTC m=+50.626181070" Nov 12 20:57:01.239581 kubelet[2780]: I1112 20:57:01.235703 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76cbf9b5bf-sf428" podStartSLOduration=26.217123652 podStartE2EDuration="29.235664613s" podCreationTimestamp="2024-11-12 20:56:32 +0000 UTC" firstStartedPulling="2024-11-12 20:56:57.516973221 +0000 UTC m=+46.907653562" 
lastFinishedPulling="2024-11-12 20:57:00.535514178 +0000 UTC m=+49.926194523" observedRunningTime="2024-11-12 20:57:01.209928206 +0000 UTC m=+50.600608563" watchObservedRunningTime="2024-11-12 20:57:01.235664613 +0000 UTC m=+50.626344970" Nov 12 20:57:01.933133 containerd[1577]: time="2024-11-12T20:57:01.932559775Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:01.935082 containerd[1577]: time="2024-11-12T20:57:01.935022946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:57:01.938323 containerd[1577]: time="2024-11-12T20:57:01.938190686Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:01.939994 containerd[1577]: time="2024-11-12T20:57:01.939954921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:01.941187 containerd[1577]: time="2024-11-12T20:57:01.941151144Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.403452403s" Nov 12 20:57:01.941280 containerd[1577]: time="2024-11-12T20:57:01.941194624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:57:01.945443 containerd[1577]: time="2024-11-12T20:57:01.945377528Z" level=info msg="CreateContainer within sandbox 
\"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:57:01.973366 containerd[1577]: time="2024-11-12T20:57:01.973316097Z" level=info msg="CreateContainer within sandbox \"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"bb7cc70570c178f03971f7f9d4028f51afb8195bf2074c26a514f1ae1e19a18a\"" Nov 12 20:57:01.975650 containerd[1577]: time="2024-11-12T20:57:01.975006848Z" level=info msg="StartContainer for \"bb7cc70570c178f03971f7f9d4028f51afb8195bf2074c26a514f1ae1e19a18a\"" Nov 12 20:57:02.169775 containerd[1577]: time="2024-11-12T20:57:02.169693607Z" level=info msg="StartContainer for \"bb7cc70570c178f03971f7f9d4028f51afb8195bf2074c26a514f1ae1e19a18a\" returns successfully" Nov 12 20:57:02.175442 containerd[1577]: time="2024-11-12T20:57:02.174749322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:57:02.203223 kubelet[2780]: I1112 20:57:02.200574 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:02.357397 ntpd[1521]: Listen normally on 6 vxlan.calico 192.168.121.0:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 6 vxlan.calico 192.168.121.0:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 7 vxlan.calico [fe80::644f:5fff:fe6b:4508%4]:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 8 cali21840cf311a [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 9 cali6d49ba99624 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 10 calic1d581b10c7 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 11 cali513f4a92c06 
[fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 12 cali793b0fed337 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 20:57:02.358475 ntpd[1521]: 12 Nov 20:57:02 ntpd[1521]: Listen normally on 13 calia725673251e [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 20:57:02.357508 ntpd[1521]: Listen normally on 7 vxlan.calico [fe80::644f:5fff:fe6b:4508%4]:123 Nov 12 20:57:02.357579 ntpd[1521]: Listen normally on 8 cali21840cf311a [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 20:57:02.357635 ntpd[1521]: Listen normally on 9 cali6d49ba99624 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 20:57:02.357686 ntpd[1521]: Listen normally on 10 calic1d581b10c7 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 20:57:02.357745 ntpd[1521]: Listen normally on 11 cali513f4a92c06 [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 20:57:02.357795 ntpd[1521]: Listen normally on 12 cali793b0fed337 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 20:57:02.357845 ntpd[1521]: Listen normally on 13 calia725673251e [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 20:57:03.613703 containerd[1577]: time="2024-11-12T20:57:03.612951589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:03.614476 containerd[1577]: time="2024-11-12T20:57:03.614411272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:57:03.617421 containerd[1577]: time="2024-11-12T20:57:03.617378360Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:03.636188 containerd[1577]: time="2024-11-12T20:57:03.632805983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:03.636188 containerd[1577]: time="2024-11-12T20:57:03.634013376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.459201042s" Nov 12 20:57:03.636188 containerd[1577]: time="2024-11-12T20:57:03.634056935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:57:03.644354 containerd[1577]: time="2024-11-12T20:57:03.643991972Z" level=info msg="CreateContainer within sandbox \"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:57:03.675689 containerd[1577]: time="2024-11-12T20:57:03.674957838Z" level=info msg="CreateContainer within sandbox \"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"51fea3e6436544e1056a38dbf608825b75746b6d2c14dfad983592323e57b7bf\"" Nov 12 20:57:03.677647 containerd[1577]: time="2024-11-12T20:57:03.677365207Z" level=info msg="StartContainer for \"51fea3e6436544e1056a38dbf608825b75746b6d2c14dfad983592323e57b7bf\"" Nov 12 20:57:03.811716 containerd[1577]: time="2024-11-12T20:57:03.811663781Z" level=info msg="StartContainer for \"51fea3e6436544e1056a38dbf608825b75746b6d2c14dfad983592323e57b7bf\" returns successfully" Nov 12 20:57:03.951388 kubelet[2780]: I1112 20:57:03.951335 2780 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io 
endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:57:03.951388 kubelet[2780]: I1112 20:57:03.951384 2780 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:57:04.226136 kubelet[2780]: I1112 20:57:04.225953 2780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-54r88" podStartSLOduration=27.341201824 podStartE2EDuration="32.225894522s" podCreationTimestamp="2024-11-12 20:56:32 +0000 UTC" firstStartedPulling="2024-11-12 20:56:58.753268681 +0000 UTC m=+48.143949020" lastFinishedPulling="2024-11-12 20:57:03.637961371 +0000 UTC m=+53.028641718" observedRunningTime="2024-11-12 20:57:04.224768966 +0000 UTC m=+53.615449322" watchObservedRunningTime="2024-11-12 20:57:04.225894522 +0000 UTC m=+53.616574878" Nov 12 20:57:05.573515 systemd[1]: Started sshd@7-10.128.0.109:22-139.178.89.65:34250.service - OpenSSH per-connection server daemon (139.178.89.65:34250). Nov 12 20:57:05.859314 sshd[5000]: Accepted publickey for core from 139.178.89.65 port 34250 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:57:05.861350 sshd[5000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:05.868628 systemd-logind[1559]: New session 8 of user core. Nov 12 20:57:05.874433 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:57:06.159705 sshd[5000]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:06.164907 systemd[1]: sshd@7-10.128.0.109:22-139.178.89.65:34250.service: Deactivated successfully. Nov 12 20:57:06.171006 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:57:06.171575 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:57:06.174505 systemd-logind[1559]: Removed session 8. 
Nov 12 20:57:07.512931 systemd[1]: run-containerd-runc-k8s.io-4fe7691769d3263710397782c690d6599594d54286a543184c60b07cf76f5c17-runc.J88bPC.mount: Deactivated successfully. Nov 12 20:57:10.798132 containerd[1577]: time="2024-11-12T20:57:10.795301136Z" level=info msg="StopPodSandbox for \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\"" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.851 [WARNING][5053] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"df2bc72d-5575-407e-ae43-315c296f87af", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d", Pod:"coredns-76f75df574-s7pp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali6d49ba99624", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.851 [INFO][5053] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.852 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" iface="eth0" netns="" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.852 [INFO][5053] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.852 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.879 [INFO][5059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.879 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.879 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.886 [WARNING][5059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.887 [INFO][5059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.889 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:10.894132 containerd[1577]: 2024-11-12 20:57:10.890 [INFO][5053] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:10.894132 containerd[1577]: time="2024-11-12T20:57:10.892317223Z" level=info msg="TearDown network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\" successfully" Nov 12 20:57:10.894132 containerd[1577]: time="2024-11-12T20:57:10.892372210Z" level=info msg="StopPodSandbox for \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\" returns successfully" Nov 12 20:57:10.894132 containerd[1577]: time="2024-11-12T20:57:10.893398390Z" level=info msg="RemovePodSandbox for \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\"" Nov 12 20:57:10.894132 containerd[1577]: time="2024-11-12T20:57:10.893440399Z" level=info msg="Forcibly stopping sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\"" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:10.952 [WARNING][5078] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"df2bc72d-5575-407e-ae43-315c296f87af", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"9064daf94caf62d7f45761f31b0dc22832269f34007f944cc3a4909028facc5d", Pod:"coredns-76f75df574-s7pp7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6d49ba99624", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
20:57:11.087593 containerd[1577]: 2024-11-12 20:57:10.953 [INFO][5078] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:10.954 [INFO][5078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" iface="eth0" netns="" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:10.954 [INFO][5078] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:10.954 [INFO][5078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:11.074 [INFO][5084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:11.074 [INFO][5084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:11.075 [INFO][5084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:11.082 [WARNING][5084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:11.082 [INFO][5084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" HandleID="k8s-pod-network.8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--s7pp7-eth0" Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:11.084 [INFO][5084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:11.087593 containerd[1577]: 2024-11-12 20:57:11.086 [INFO][5078] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8" Nov 12 20:57:11.088930 containerd[1577]: time="2024-11-12T20:57:11.088686834Z" level=info msg="TearDown network for sandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\" successfully" Nov 12 20:57:11.208474 systemd[1]: Started sshd@8-10.128.0.109:22-139.178.89.65:49476.service - OpenSSH per-connection server daemon (139.178.89.65:49476). Nov 12 20:57:11.312330 containerd[1577]: time="2024-11-12T20:57:11.312263923Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:11.312897 containerd[1577]: time="2024-11-12T20:57:11.312736493Z" level=info msg="RemovePodSandbox \"8a385ebb57ba4eda958624489fb9296eb15897265b8a2705f8f59f3f14346ff8\" returns successfully" Nov 12 20:57:11.321143 containerd[1577]: time="2024-11-12T20:57:11.316448791Z" level=info msg="StopPodSandbox for \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\"" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.367 [WARNING][5106] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642", Pod:"calico-apiserver-76cbf9b5bf-pz72g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1d581b10c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.368 [INFO][5106] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.368 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" iface="eth0" netns="" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.368 [INFO][5106] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.368 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.396 [INFO][5112] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.396 [INFO][5112] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.396 [INFO][5112] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.405 [WARNING][5112] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.405 [INFO][5112] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.407 [INFO][5112] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:11.410921 containerd[1577]: 2024-11-12 20:57:11.409 [INFO][5106] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.412200 containerd[1577]: time="2024-11-12T20:57:11.412070736Z" level=info msg="TearDown network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\" successfully" Nov 12 20:57:11.412318 containerd[1577]: time="2024-11-12T20:57:11.412228179Z" level=info msg="StopPodSandbox for \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\" returns successfully" Nov 12 20:57:11.413758 containerd[1577]: time="2024-11-12T20:57:11.412990919Z" level=info msg="RemovePodSandbox for \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\"" Nov 12 20:57:11.413758 containerd[1577]: time="2024-11-12T20:57:11.413041535Z" level=info msg="Forcibly stopping sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\"" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.466 [WARNING][5130] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2176e7e1-d94c-479d-92e3-e9f80e8d0f4d", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"c427d8c8a99a51636d9507624035b804d10679875230bc3d2fd1274ce1d63642", Pod:"calico-apiserver-76cbf9b5bf-pz72g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic1d581b10c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.467 [INFO][5130] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.467 [INFO][5130] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" iface="eth0" netns="" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.467 [INFO][5130] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.467 [INFO][5130] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.494 [INFO][5136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.494 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.494 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.501 [WARNING][5136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.502 [INFO][5136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" HandleID="k8s-pod-network.1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--pz72g-eth0" Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.503 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:11.506258 containerd[1577]: 2024-11-12 20:57:11.505 [INFO][5130] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa" Nov 12 20:57:11.507120 containerd[1577]: time="2024-11-12T20:57:11.506289405Z" level=info msg="TearDown network for sandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\" successfully" Nov 12 20:57:11.521194 sshd[5090]: Accepted publickey for core from 139.178.89.65 port 49476 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:57:11.523147 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:11.530413 systemd-logind[1559]: New session 9 of user core. Nov 12 20:57:11.535470 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:57:11.824485 sshd[5090]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:11.829064 systemd[1]: sshd@8-10.128.0.109:22-139.178.89.65:49476.service: Deactivated successfully. 
Nov 12 20:57:11.834010 containerd[1577]: time="2024-11-12T20:57:11.833842702Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:57:11.834010 containerd[1577]: time="2024-11-12T20:57:11.833940420Z" level=info msg="RemovePodSandbox \"1c6aeae360d92734796a98b497c969103c9cbd81806a4bc8a603e6f917ee8efa\" returns successfully" Nov 12 20:57:11.838504 containerd[1577]: time="2024-11-12T20:57:11.836758369Z" level=info msg="StopPodSandbox for \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\"" Nov 12 20:57:11.838180 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:57:11.838693 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:57:11.841499 systemd-logind[1559]: Removed session 9. Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.886 [WARNING][5167] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"017708f0-c5c9-4372-bddb-a2a7a49fd2e0", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1", Pod:"coredns-76f75df574-r8k87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali793b0fed337", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.886 [INFO][5167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.886 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" iface="eth0" netns="" Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.886 [INFO][5167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.887 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.913 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.913 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.913 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.922 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.922 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.924 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:11.926667 containerd[1577]: 2024-11-12 20:57:11.925 [INFO][5167] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:11.926667 containerd[1577]: time="2024-11-12T20:57:11.926630623Z" level=info msg="TearDown network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\" successfully" Nov 12 20:57:11.926667 containerd[1577]: time="2024-11-12T20:57:11.926667786Z" level=info msg="StopPodSandbox for \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\" returns successfully" Nov 12 20:57:11.927798 containerd[1577]: time="2024-11-12T20:57:11.927721657Z" level=info msg="RemovePodSandbox for \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\"" Nov 12 20:57:11.927798 containerd[1577]: time="2024-11-12T20:57:11.927772784Z" level=info msg="Forcibly stopping sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\"" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:11.972 [WARNING][5192] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"017708f0-c5c9-4372-bddb-a2a7a49fd2e0", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"ee9eaf4eb692fc262d16146bfd65d905ce143aaa378c01686c6796251a5896d1", Pod:"coredns-76f75df574-r8k87", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali793b0fed337", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:11.972 [INFO][5192] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:11.972 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" iface="eth0" netns="" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:11.972 [INFO][5192] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:11.972 [INFO][5192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:12.001 [INFO][5198] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:12.001 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:12.001 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:12.007 [WARNING][5198] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:12.008 [INFO][5198] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" HandleID="k8s-pod-network.dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-coredns--76f75df574--r8k87-eth0" Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:12.009 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.012160 containerd[1577]: 2024-11-12 20:57:12.010 [INFO][5192] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5" Nov 12 20:57:12.013420 containerd[1577]: time="2024-11-12T20:57:12.012180862Z" level=info msg="TearDown network for sandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\" successfully" Nov 12 20:57:12.017045 containerd[1577]: time="2024-11-12T20:57:12.016996707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:12.017248 containerd[1577]: time="2024-11-12T20:57:12.017081961Z" level=info msg="RemovePodSandbox \"dbc493c23dc98d1883432e7b999c1ea55bc677d8db22e0b9133c2a042fbaf0e5\" returns successfully" Nov 12 20:57:12.017847 containerd[1577]: time="2024-11-12T20:57:12.017710067Z" level=info msg="StopPodSandbox for \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\"" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.064 [WARNING][5216] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"43370d02-66a8-4ab1-8864-281286226360", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b", Pod:"calico-apiserver-76cbf9b5bf-sf428", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali513f4a92c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.064 [INFO][5216] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.064 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" iface="eth0" netns="" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.064 [INFO][5216] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.064 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.094 [INFO][5222] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.094 [INFO][5222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.094 [INFO][5222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.102 [WARNING][5222] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.102 [INFO][5222] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.104 [INFO][5222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.106963 containerd[1577]: 2024-11-12 20:57:12.105 [INFO][5216] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.106963 containerd[1577]: time="2024-11-12T20:57:12.106856097Z" level=info msg="TearDown network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\" successfully" Nov 12 20:57:12.106963 containerd[1577]: time="2024-11-12T20:57:12.106889657Z" level=info msg="StopPodSandbox for \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\" returns successfully" Nov 12 20:57:12.109190 containerd[1577]: time="2024-11-12T20:57:12.109056183Z" level=info msg="RemovePodSandbox for \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\"" Nov 12 20:57:12.109190 containerd[1577]: time="2024-11-12T20:57:12.109112757Z" level=info msg="Forcibly stopping sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\"" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.155 [WARNING][5240] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0", GenerateName:"calico-apiserver-76cbf9b5bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"43370d02-66a8-4ab1-8864-281286226360", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76cbf9b5bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"c0297d509d0a77973a405aedf09b975855c9fcd4c21337ce83d857341654f43b", Pod:"calico-apiserver-76cbf9b5bf-sf428", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali513f4a92c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.155 [INFO][5240] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.155 [INFO][5240] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" iface="eth0" netns="" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.155 [INFO][5240] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.155 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.183 [INFO][5246] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.183 [INFO][5246] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.183 [INFO][5246] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.190 [WARNING][5246] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.190 [INFO][5246] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" HandleID="k8s-pod-network.8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--apiserver--76cbf9b5bf--sf428-eth0" Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.191 [INFO][5246] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.194144 containerd[1577]: 2024-11-12 20:57:12.192 [INFO][5240] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c" Nov 12 20:57:12.195467 containerd[1577]: time="2024-11-12T20:57:12.194198420Z" level=info msg="TearDown network for sandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\" successfully" Nov 12 20:57:12.199201 containerd[1577]: time="2024-11-12T20:57:12.199127053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:12.199389 containerd[1577]: time="2024-11-12T20:57:12.199215041Z" level=info msg="RemovePodSandbox \"8d9e98b7510828c59b080956f6da02e08f0635913b449c536e70dc84a9a4ee1c\" returns successfully" Nov 12 20:57:12.200059 containerd[1577]: time="2024-11-12T20:57:12.200025045Z" level=info msg="StopPodSandbox for \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\"" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.250 [WARNING][5264] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0", GenerateName:"calico-kube-controllers-54d5d9c55f-", Namespace:"calico-system", SelfLink:"", UID:"1f661fac-f550-4093-a121-8425e9897475", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d5d9c55f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73", Pod:"calico-kube-controllers-54d5d9c55f-qv98l", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21840cf311a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.251 [INFO][5264] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.251 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" iface="eth0" netns="" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.251 [INFO][5264] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.251 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.277 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.278 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.278 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.288 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.288 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.290 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.292977 containerd[1577]: 2024-11-12 20:57:12.291 [INFO][5264] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.293993 containerd[1577]: time="2024-11-12T20:57:12.293024724Z" level=info msg="TearDown network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\" successfully" Nov 12 20:57:12.293993 containerd[1577]: time="2024-11-12T20:57:12.293057855Z" level=info msg="StopPodSandbox for \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\" returns successfully" Nov 12 20:57:12.293993 containerd[1577]: time="2024-11-12T20:57:12.293792254Z" level=info msg="RemovePodSandbox for \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\"" Nov 12 20:57:12.293993 containerd[1577]: time="2024-11-12T20:57:12.293859529Z" level=info msg="Forcibly stopping sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\"" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.338 [WARNING][5289] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0", GenerateName:"calico-kube-controllers-54d5d9c55f-", Namespace:"calico-system", SelfLink:"", UID:"1f661fac-f550-4093-a121-8425e9897475", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d5d9c55f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"748d034b895655dcb218ae29461813a6296b7c3575dd6d352ac2e72285763f73", Pod:"calico-kube-controllers-54d5d9c55f-qv98l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali21840cf311a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.339 [INFO][5289] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.339 
[INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" iface="eth0" netns="" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.339 [INFO][5289] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.339 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.363 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.364 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.364 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.373 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.373 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" HandleID="k8s-pod-network.3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-calico--kube--controllers--54d5d9c55f--qv98l-eth0" Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.374 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.377391 containerd[1577]: 2024-11-12 20:57:12.376 [INFO][5289] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c" Nov 12 20:57:12.377391 containerd[1577]: time="2024-11-12T20:57:12.377347107Z" level=info msg="TearDown network for sandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\" successfully" Nov 12 20:57:12.382416 containerd[1577]: time="2024-11-12T20:57:12.382360260Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:12.382647 containerd[1577]: time="2024-11-12T20:57:12.382445104Z" level=info msg="RemovePodSandbox \"3a6b979eb7dfc31129d51c7c4c8557b26b48c9fa496fb286e11ebe9d36078f7c\" returns successfully" Nov 12 20:57:12.383052 containerd[1577]: time="2024-11-12T20:57:12.383022484Z" level=info msg="StopPodSandbox for \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\"" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.426 [WARNING][5314] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ebb665c-7489-46df-9cad-fdce94e5d49a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5", Pod:"csi-node-driver-54r88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia725673251e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.426 [INFO][5314] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.426 [INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" iface="eth0" netns="" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.426 [INFO][5314] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.426 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.451 [INFO][5321] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.452 [INFO][5321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.452 [INFO][5321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.460 [WARNING][5321] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.462 [INFO][5321] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.468 [INFO][5321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.475435 containerd[1577]: 2024-11-12 20:57:12.472 [INFO][5314] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.476598 containerd[1577]: time="2024-11-12T20:57:12.476147592Z" level=info msg="TearDown network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\" successfully" Nov 12 20:57:12.476598 containerd[1577]: time="2024-11-12T20:57:12.476187212Z" level=info msg="StopPodSandbox for \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\" returns successfully" Nov 12 20:57:12.476958 containerd[1577]: time="2024-11-12T20:57:12.476847357Z" level=info msg="RemovePodSandbox for \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\"" Nov 12 20:57:12.476958 containerd[1577]: time="2024-11-12T20:57:12.476890847Z" level=info msg="Forcibly stopping sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\"" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.518 [WARNING][5340] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ebb665c-7489-46df-9cad-fdce94e5d49a", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-366b2cca8b381e5feeb3.c.flatcar-212911.internal", ContainerID:"707f9f65c16adc4379bed9b1a69882b42baffcdc988a49d0e74169f2b89e83c5", Pod:"csi-node-driver-54r88", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia725673251e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.518 [INFO][5340] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.518 [INFO][5340] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" iface="eth0" netns="" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.518 [INFO][5340] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.518 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.547 [INFO][5346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.547 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.547 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.555 [WARNING][5346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.555 [INFO][5346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" HandleID="k8s-pod-network.9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Workload="ci--4081--2--0--366b2cca8b381e5feeb3.c.flatcar--212911.internal-k8s-csi--node--driver--54r88-eth0" Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.556 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.559248 containerd[1577]: 2024-11-12 20:57:12.557 [INFO][5340] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0" Nov 12 20:57:12.560391 containerd[1577]: time="2024-11-12T20:57:12.559255588Z" level=info msg="TearDown network for sandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\" successfully" Nov 12 20:57:12.564039 containerd[1577]: time="2024-11-12T20:57:12.563974347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:57:12.564228 containerd[1577]: time="2024-11-12T20:57:12.564059685Z" level=info msg="RemovePodSandbox \"9f526b98ef45f5de22b4cfcfb8c40687a8d0a62be4e0bc52899da8c106c218e0\" returns successfully" Nov 12 20:57:16.872437 systemd[1]: Started sshd@9-10.128.0.109:22-139.178.89.65:49488.service - OpenSSH per-connection server daemon (139.178.89.65:49488). 
Nov 12 20:57:17.166005 sshd[5378]: Accepted publickey for core from 139.178.89.65 port 49488 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:57:17.167850 sshd[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:17.173901 systemd-logind[1559]: New session 10 of user core. Nov 12 20:57:17.178568 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:57:17.459546 sshd[5378]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:17.466143 systemd[1]: sshd@9-10.128.0.109:22-139.178.89.65:49488.service: Deactivated successfully. Nov 12 20:57:17.470960 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:57:17.472523 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:57:17.474003 systemd-logind[1559]: Removed session 10. Nov 12 20:57:17.507455 systemd[1]: Started sshd@10-10.128.0.109:22-139.178.89.65:34632.service - OpenSSH per-connection server daemon (139.178.89.65:34632). Nov 12 20:57:17.796808 sshd[5393]: Accepted publickey for core from 139.178.89.65 port 34632 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:57:17.799009 sshd[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:57:17.805160 systemd-logind[1559]: New session 11 of user core. Nov 12 20:57:17.810448 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:57:18.127955 sshd[5393]: pam_unix(sshd:session): session closed for user core Nov 12 20:57:18.132739 systemd[1]: sshd@10-10.128.0.109:22-139.178.89.65:34632.service: Deactivated successfully. Nov 12 20:57:18.138280 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:57:18.140036 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:57:18.142276 systemd-logind[1559]: Removed session 11. 
Nov 12 20:57:18.177525 systemd[1]: Started sshd@11-10.128.0.109:22-139.178.89.65:34646.service - OpenSSH per-connection server daemon (139.178.89.65:34646).
Nov 12 20:57:18.464729 sshd[5405]: Accepted publickey for core from 139.178.89.65 port 34646 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:18.466739 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:18.473394 systemd-logind[1559]: New session 12 of user core.
Nov 12 20:57:18.478485 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 20:57:18.766601 sshd[5405]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:18.773489 systemd[1]: sshd@11-10.128.0.109:22-139.178.89.65:34646.service: Deactivated successfully.
Nov 12 20:57:18.778732 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit.
Nov 12 20:57:18.779320 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 20:57:18.781531 systemd-logind[1559]: Removed session 12.
Nov 12 20:57:23.816942 systemd[1]: Started sshd@12-10.128.0.109:22-139.178.89.65:34650.service - OpenSSH per-connection server daemon (139.178.89.65:34650).
Nov 12 20:57:24.118031 sshd[5423]: Accepted publickey for core from 139.178.89.65 port 34650 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:24.120062 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:24.129177 systemd-logind[1559]: New session 13 of user core.
Nov 12 20:57:24.137921 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 20:57:24.463019 sshd[5423]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:24.467930 systemd[1]: sshd@12-10.128.0.109:22-139.178.89.65:34650.service: Deactivated successfully.
Nov 12 20:57:24.474158 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit.
Nov 12 20:57:24.474959 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 20:57:24.477044 systemd-logind[1559]: Removed session 13.
Nov 12 20:57:25.577138 kubelet[2780]: I1112 20:57:25.576317 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:57:29.511908 systemd[1]: Started sshd@13-10.128.0.109:22-139.178.89.65:33792.service - OpenSSH per-connection server daemon (139.178.89.65:33792).
Nov 12 20:57:29.803312 sshd[5445]: Accepted publickey for core from 139.178.89.65 port 33792 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:29.805337 sshd[5445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:29.812010 systemd-logind[1559]: New session 14 of user core.
Nov 12 20:57:29.816549 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 20:57:30.089951 sshd[5445]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:30.096543 systemd[1]: sshd@13-10.128.0.109:22-139.178.89.65:33792.service: Deactivated successfully.
Nov 12 20:57:30.101972 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:57:30.103551 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 20:57:30.104975 systemd-logind[1559]: Removed session 14.
Nov 12 20:57:35.138910 systemd[1]: Started sshd@14-10.128.0.109:22-139.178.89.65:33798.service - OpenSSH per-connection server daemon (139.178.89.65:33798).
Nov 12 20:57:35.430867 sshd[5465]: Accepted publickey for core from 139.178.89.65 port 33798 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:35.432880 sshd[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:35.439340 systemd-logind[1559]: New session 15 of user core.
Nov 12 20:57:35.444446 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 20:57:35.722621 sshd[5465]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:35.728943 systemd[1]: sshd@14-10.128.0.109:22-139.178.89.65:33798.service: Deactivated successfully.
Nov 12 20:57:35.736062 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit.
Nov 12 20:57:35.736575 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 20:57:35.739843 systemd-logind[1559]: Removed session 15.
Nov 12 20:57:37.512715 systemd[1]: run-containerd-runc-k8s.io-4fe7691769d3263710397782c690d6599594d54286a543184c60b07cf76f5c17-runc.W6fp2e.mount: Deactivated successfully.
Nov 12 20:57:40.774532 systemd[1]: Started sshd@15-10.128.0.109:22-139.178.89.65:50372.service - OpenSSH per-connection server daemon (139.178.89.65:50372).
Nov 12 20:57:41.056635 sshd[5503]: Accepted publickey for core from 139.178.89.65 port 50372 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:41.058772 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:41.065433 systemd-logind[1559]: New session 16 of user core.
Nov 12 20:57:41.069796 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:57:41.345314 sshd[5503]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:41.350902 systemd[1]: sshd@15-10.128.0.109:22-139.178.89.65:50372.service: Deactivated successfully.
Nov 12 20:57:41.357613 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:57:41.357815 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:57:41.359894 systemd-logind[1559]: Removed session 16.
Nov 12 20:57:41.394437 systemd[1]: Started sshd@16-10.128.0.109:22-139.178.89.65:50376.service - OpenSSH per-connection server daemon (139.178.89.65:50376).
Nov 12 20:57:41.684266 sshd[5517]: Accepted publickey for core from 139.178.89.65 port 50376 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:41.686140 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:41.693013 systemd-logind[1559]: New session 17 of user core.
Nov 12 20:57:41.697516 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:57:42.036064 sshd[5517]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:42.040836 systemd[1]: sshd@16-10.128.0.109:22-139.178.89.65:50376.service: Deactivated successfully.
Nov 12 20:57:42.048187 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:57:42.048369 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:57:42.050477 systemd-logind[1559]: Removed session 17.
Nov 12 20:57:42.084453 systemd[1]: Started sshd@17-10.128.0.109:22-139.178.89.65:50392.service - OpenSSH per-connection server daemon (139.178.89.65:50392).
Nov 12 20:57:42.370659 sshd[5529]: Accepted publickey for core from 139.178.89.65 port 50392 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:42.372642 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:42.378547 systemd-logind[1559]: New session 18 of user core.
Nov 12 20:57:42.384889 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:57:43.306356 systemd[1]: run-containerd-runc-k8s.io-fea9c18a28ec4ffc5b396ec346e8fde888037c650fc1f01ad89cdd00e0b8f962-runc.S2nTis.mount: Deactivated successfully.
Nov 12 20:57:44.547036 sshd[5529]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:44.553248 systemd[1]: sshd@17-10.128.0.109:22-139.178.89.65:50392.service: Deactivated successfully.
Nov 12 20:57:44.560325 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:57:44.561389 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:57:44.563395 systemd-logind[1559]: Removed session 18.
Nov 12 20:57:44.596416 systemd[1]: Started sshd@18-10.128.0.109:22-139.178.89.65:50396.service - OpenSSH per-connection server daemon (139.178.89.65:50396).
Nov 12 20:57:44.885560 sshd[5565]: Accepted publickey for core from 139.178.89.65 port 50396 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:44.887818 sshd[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:44.894902 systemd-logind[1559]: New session 19 of user core.
Nov 12 20:57:44.900823 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:57:45.391365 sshd[5565]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:45.396292 systemd[1]: sshd@18-10.128.0.109:22-139.178.89.65:50396.service: Deactivated successfully.
Nov 12 20:57:45.402326 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:57:45.404416 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:57:45.405997 systemd-logind[1559]: Removed session 19.
Nov 12 20:57:45.441765 systemd[1]: Started sshd@19-10.128.0.109:22-139.178.89.65:50404.service - OpenSSH per-connection server daemon (139.178.89.65:50404).
Nov 12 20:57:45.728938 sshd[5577]: Accepted publickey for core from 139.178.89.65 port 50404 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:45.730889 sshd[5577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:45.737289 systemd-logind[1559]: New session 20 of user core.
Nov 12 20:57:45.743553 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:57:46.015478 sshd[5577]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:46.020878 systemd[1]: sshd@19-10.128.0.109:22-139.178.89.65:50404.service: Deactivated successfully.
Nov 12 20:57:46.027187 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:57:46.028235 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:57:46.029961 systemd-logind[1559]: Removed session 20.
Nov 12 20:57:51.061781 systemd[1]: Started sshd@20-10.128.0.109:22-139.178.89.65:46866.service - OpenSSH per-connection server daemon (139.178.89.65:46866).
Nov 12 20:57:51.358789 sshd[5591]: Accepted publickey for core from 139.178.89.65 port 46866 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:51.360760 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:51.366737 systemd-logind[1559]: New session 21 of user core.
Nov 12 20:57:51.372844 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:57:51.641910 sshd[5591]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:51.647247 systemd[1]: sshd@20-10.128.0.109:22-139.178.89.65:46866.service: Deactivated successfully.
Nov 12 20:57:51.654549 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:57:51.655557 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:57:51.657513 systemd-logind[1559]: Removed session 21.
Nov 12 20:57:56.690973 systemd[1]: Started sshd@21-10.128.0.109:22-139.178.89.65:46872.service - OpenSSH per-connection server daemon (139.178.89.65:46872).
Nov 12 20:57:56.981932 sshd[5610]: Accepted publickey for core from 139.178.89.65 port 46872 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:57:56.984064 sshd[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:56.989934 systemd-logind[1559]: New session 22 of user core.
Nov 12 20:57:56.996451 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:57:57.262243 sshd[5610]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:57.267767 systemd[1]: sshd@21-10.128.0.109:22-139.178.89.65:46872.service: Deactivated successfully.
Nov 12 20:57:57.274916 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:57:57.275865 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:57:57.278613 systemd-logind[1559]: Removed session 22.
Nov 12 20:58:02.311480 systemd[1]: Started sshd@22-10.128.0.109:22-139.178.89.65:46360.service - OpenSSH per-connection server daemon (139.178.89.65:46360).
Nov 12 20:58:02.612856 sshd[5623]: Accepted publickey for core from 139.178.89.65 port 46360 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:58:02.614758 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:02.621538 systemd-logind[1559]: New session 23 of user core.
Nov 12 20:58:02.625475 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:58:02.900534 sshd[5623]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:02.905826 systemd[1]: sshd@22-10.128.0.109:22-139.178.89.65:46360.service: Deactivated successfully.
Nov 12 20:58:02.912677 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:58:02.915615 systemd-logind[1559]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:58:02.917457 systemd-logind[1559]: Removed session 23.
Nov 12 20:58:07.951415 systemd[1]: Started sshd@23-10.128.0.109:22-139.178.89.65:33770.service - OpenSSH per-connection server daemon (139.178.89.65:33770).
Nov 12 20:58:08.244880 sshd[5678]: Accepted publickey for core from 139.178.89.65 port 33770 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs
Nov 12 20:58:08.246911 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:08.254359 systemd-logind[1559]: New session 24 of user core.
Nov 12 20:58:08.265143 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:58:08.549435 sshd[5678]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:08.558747 systemd[1]: sshd@23-10.128.0.109:22-139.178.89.65:33770.service: Deactivated successfully.
Nov 12 20:58:08.564624 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:58:08.565964 systemd-logind[1559]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:58:08.567622 systemd-logind[1559]: Removed session 24.