Nov 12 20:46:43.128473 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:46:43.128515 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:46:43.128533 kernel: BIOS-provided physical RAM map: Nov 12 20:46:43.128548 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Nov 12 20:46:43.128561 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Nov 12 20:46:43.128574 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Nov 12 20:46:43.128610 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Nov 12 20:46:43.128628 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Nov 12 20:46:43.128857 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Nov 12 20:46:43.128876 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Nov 12 20:46:43.128891 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Nov 12 20:46:43.128905 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Nov 12 20:46:43.128920 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Nov 12 20:46:43.128935 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Nov 12 20:46:43.128960 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Nov 12 20:46:43.128975 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Nov 12 20:46:43.128992 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Nov 12 20:46:43.129008 kernel: NX (Execute Disable) protection: active Nov 12 20:46:43.129025 kernel: APIC: Static calls initialized Nov 12 20:46:43.129042 kernel: efi: EFI v2.7 by EDK II Nov 12 20:46:43.129059 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Nov 12 20:46:43.129076 kernel: SMBIOS 2.4 present. Nov 12 20:46:43.129093 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Nov 12 20:46:43.129109 kernel: Hypervisor detected: KVM Nov 12 20:46:43.129128 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 20:46:43.129143 kernel: kvm-clock: using sched offset of 12149096538 cycles Nov 12 20:46:43.129159 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 20:46:43.129175 kernel: tsc: Detected 2299.998 MHz processor Nov 12 20:46:43.129192 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:46:43.129209 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:46:43.129233 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Nov 12 20:46:43.129250 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Nov 12 20:46:43.129267 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:46:43.129285 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Nov 12 20:46:43.129300 kernel: Using GB pages for direct mapping Nov 12 20:46:43.129315 kernel: Secure boot disabled Nov 12 20:46:43.129331 kernel: ACPI: Early table checksum verification disabled Nov 12 20:46:43.129345 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Nov 12 20:46:43.129361 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Nov 12 20:46:43.129379 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Nov 12 20:46:43.129403 kernel: ACPI: DSDT 
0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Nov 12 20:46:43.129425 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Nov 12 20:46:43.129443 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Nov 12 20:46:43.129461 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Nov 12 20:46:43.129479 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Nov 12 20:46:43.129498 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Nov 12 20:46:43.129516 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Nov 12 20:46:43.129536 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Nov 12 20:46:43.129553 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Nov 12 20:46:43.129569 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Nov 12 20:46:43.129585 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Nov 12 20:46:43.129603 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Nov 12 20:46:43.129620 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Nov 12 20:46:43.129638 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Nov 12 20:46:43.130704 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Nov 12 20:46:43.130723 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Nov 12 20:46:43.130749 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Nov 12 20:46:43.130767 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:46:43.130784 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 12 20:46:43.130802 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 12 20:46:43.130820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Nov 12 20:46:43.130838 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Nov 12 20:46:43.130856 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Nov 12 20:46:43.130875 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Nov 12 20:46:43.130893 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Nov 12 20:46:43.130915 kernel: Zone ranges: Nov 12 20:46:43.130933 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:46:43.130951 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 12 20:46:43.130968 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Nov 12 20:46:43.130985 kernel: Movable zone start for each node Nov 12 20:46:43.131003 kernel: Early memory node ranges Nov 12 20:46:43.131020 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Nov 12 20:46:43.131038 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Nov 12 20:46:43.131056 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Nov 12 20:46:43.131079 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Nov 12 20:46:43.131096 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Nov 12 20:46:43.131113 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Nov 12 20:46:43.131131 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:46:43.131149 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Nov 12 20:46:43.131167 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Nov 12 20:46:43.131184 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 12 20:46:43.131202 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Nov 12 20:46:43.131229 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 12 20:46:43.131252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 20:46:43.131271 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 12 
20:46:43.131289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 20:46:43.131305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:46:43.131323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 20:46:43.131340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 20:46:43.131358 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:46:43.131376 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 12 20:46:43.131393 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 12 20:46:43.131415 kernel: Booting paravirtualized kernel on KVM Nov 12 20:46:43.131432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:46:43.131448 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 12 20:46:43.131465 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Nov 12 20:46:43.131483 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 12 20:46:43.131498 kernel: pcpu-alloc: [0] 0 1 Nov 12 20:46:43.131514 kernel: kvm-guest: PV spinlocks enabled Nov 12 20:46:43.131532 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:46:43.131551 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:46:43.131574 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Nov 12 20:46:43.131590 kernel: random: crng init done Nov 12 20:46:43.131607 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 12 20:46:43.131624 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:46:43.131656 kernel: Fallback order for Node 0: 0 Nov 12 20:46:43.131674 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Nov 12 20:46:43.131691 kernel: Policy zone: Normal Nov 12 20:46:43.131707 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:46:43.131730 kernel: software IO TLB: area num 2. Nov 12 20:46:43.131749 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 346940K reserved, 0K cma-reserved) Nov 12 20:46:43.131768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 20:46:43.131786 kernel: Kernel/User page tables isolation: enabled Nov 12 20:46:43.131805 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:46:43.131823 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:46:43.131841 kernel: Dynamic Preempt: voluntary Nov 12 20:46:43.131860 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:46:43.131880 kernel: rcu: RCU event tracing is enabled. Nov 12 20:46:43.131920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 20:46:43.131939 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:46:43.131959 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:46:43.131981 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:46:43.132001 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:46:43.132020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 20:46:43.132039 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 12 20:46:43.132059 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:46:43.132078 kernel: Console: colour dummy device 80x25 Nov 12 20:46:43.132100 kernel: printk: console [ttyS0] enabled Nov 12 20:46:43.132118 kernel: ACPI: Core revision 20230628 Nov 12 20:46:43.132136 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:46:43.132154 kernel: x2apic enabled Nov 12 20:46:43.132173 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:46:43.132190 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Nov 12 20:46:43.132208 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 12 20:46:43.132237 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Nov 12 20:46:43.132259 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Nov 12 20:46:43.132276 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Nov 12 20:46:43.132294 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:46:43.132312 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 12 20:46:43.132329 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 12 20:46:43.132347 kernel: Spectre V2 : Mitigation: IBRS Nov 12 20:46:43.132365 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:46:43.132383 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:46:43.132401 kernel: RETBleed: Mitigation: IBRS Nov 12 20:46:43.132424 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:46:43.132441 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Nov 12 20:46:43.132460 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:46:43.132479 kernel: MDS: Mitigation: Clear CPU buffers Nov 12 20:46:43.132499 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 
12 20:46:43.132518 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:46:43.132536 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:46:43.132554 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:46:43.132573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:46:43.132595 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 12 20:46:43.132613 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:46:43.132630 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:46:43.134719 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:46:43.134745 kernel: landlock: Up and running. Nov 12 20:46:43.134766 kernel: SELinux: Initializing. Nov 12 20:46:43.134785 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:46:43.134804 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:46:43.134824 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Nov 12 20:46:43.134849 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:46:43.134869 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:46:43.134889 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:46:43.134909 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Nov 12 20:46:43.134928 kernel: signal: max sigframe size: 1776 Nov 12 20:46:43.134947 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:46:43.134968 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:46:43.134986 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:46:43.135005 kernel: smp: Bringing up secondary CPUs ... 
Nov 12 20:46:43.135029 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:46:43.135049 kernel: .... node #0, CPUs: #1 Nov 12 20:46:43.135069 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 12 20:46:43.135090 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 12 20:46:43.135109 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:46:43.135128 kernel: smpboot: Max logical packages: 1 Nov 12 20:46:43.135148 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 12 20:46:43.135168 kernel: devtmpfs: initialized Nov 12 20:46:43.135191 kernel: x86/mm: Memory block size: 128MB Nov 12 20:46:43.135211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Nov 12 20:46:43.135238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:46:43.135257 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 20:46:43.135276 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:46:43.135296 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:46:43.135315 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:46:43.135335 kernel: audit: type=2000 audit(1731444401.980:1): state=initialized audit_enabled=0 res=1 Nov 12 20:46:43.135354 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:46:43.135377 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:46:43.135396 kernel: cpuidle: using governor menu Nov 12 20:46:43.135415 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:46:43.135434 kernel: dca service started, version 1.12.1 Nov 12 20:46:43.135454 kernel: PCI: Using configuration type 1 for base access Nov 12 
20:46:43.135473 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 12 20:46:43.135494 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:46:43.135513 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:46:43.135532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:46:43.135555 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:46:43.135575 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:46:43.135594 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:46:43.135613 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:46:43.135633 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:46:43.136691 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 12 20:46:43.136715 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:46:43.136734 kernel: ACPI: Interpreter enabled Nov 12 20:46:43.136752 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 20:46:43.136777 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:46:43.136795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:46:43.136814 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 12 20:46:43.136829 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Nov 12 20:46:43.136846 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 20:46:43.137130 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:46:43.137345 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 12 20:46:43.137540 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 12 20:46:43.137565 kernel: PCI host bridge to bus 0000:00 Nov 12 20:46:43.139728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Nov 12 20:46:43.139915 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 12 20:46:43.140087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 12 20:46:43.140255 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Nov 12 20:46:43.140421 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 12 20:46:43.140628 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Nov 12 20:46:43.140852 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Nov 12 20:46:43.141045 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Nov 12 20:46:43.141237 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Nov 12 20:46:43.141432 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Nov 12 20:46:43.141615 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 12 20:46:43.142899 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Nov 12 20:46:43.143119 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Nov 12 20:46:43.143338 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Nov 12 20:46:43.143530 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Nov 12 20:46:43.144863 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Nov 12 20:46:43.145068 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Nov 12 20:46:43.145263 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Nov 12 20:46:43.145296 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 12 20:46:43.145317 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 12 20:46:43.145337 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 12 20:46:43.145357 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 12 20:46:43.145377 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Nov 12 20:46:43.145397 kernel: iommu: Default domain type: Translated Nov 12 20:46:43.145417 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Nov 12 20:46:43.145436 kernel: efivars: Registered efivars operations Nov 12 20:46:43.145457 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:46:43.145480 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 12 20:46:43.145500 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Nov 12 20:46:43.145520 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Nov 12 20:46:43.145539 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Nov 12 20:46:43.145558 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Nov 12 20:46:43.145577 kernel: vgaarb: loaded Nov 12 20:46:43.145597 kernel: clocksource: Switched to clocksource kvm-clock Nov 12 20:46:43.145617 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:46:43.145637 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:46:43.147713 kernel: pnp: PnP ACPI init Nov 12 20:46:43.147733 kernel: pnp: PnP ACPI: found 7 devices Nov 12 20:46:43.147754 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:46:43.147774 kernel: NET: Registered PF_INET protocol family Nov 12 20:46:43.147794 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 12 20:46:43.147814 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 12 20:46:43.147834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:46:43.147854 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 20:46:43.147873 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 12 20:46:43.147897 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 12 20:46:43.147917 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 12 20:46:43.147936 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 12 20:46:43.147956 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:46:43.147975 kernel: NET: Registered PF_XDP protocol family Nov 12 20:46:43.148177 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 12 20:46:43.148353 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 12 20:46:43.148518 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 12 20:46:43.149745 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Nov 12 20:46:43.149974 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Nov 12 20:46:43.150003 kernel: PCI: CLS 0 bytes, default 64 Nov 12 20:46:43.150025 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 12 20:46:43.150045 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Nov 12 20:46:43.150066 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 12 20:46:43.150087 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 12 20:46:43.150107 kernel: clocksource: Switched to clocksource tsc Nov 12 20:46:43.150134 kernel: Initialise system trusted keyrings Nov 12 20:46:43.150154 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 12 20:46:43.150174 kernel: Key type asymmetric registered Nov 12 20:46:43.150194 kernel: Asymmetric key parser 'x509' registered Nov 12 20:46:43.150213 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:46:43.150239 kernel: io scheduler mq-deadline registered Nov 12 20:46:43.150259 kernel: io scheduler kyber registered Nov 12 20:46:43.150278 kernel: io scheduler bfq registered Nov 12 20:46:43.150298 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 20:46:43.150323 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Nov 12 20:46:43.150511 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Nov 12 20:46:43.150537 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Nov 12 20:46:43.151702 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Nov 12 20:46:43.151736 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Nov 12 20:46:43.151930 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Nov 12 20:46:43.151956 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:46:43.151975 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:46:43.151995 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 12 20:46:43.152022 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Nov 12 20:46:43.152042 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Nov 12 20:46:43.152245 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Nov 12 20:46:43.152272 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 12 20:46:43.152293 kernel: i8042: Warning: Keylock active Nov 12 20:46:43.152312 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 12 20:46:43.152332 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 12 20:46:43.152522 kernel: rtc_cmos 00:00: RTC can wake from S4 Nov 12 20:46:43.152794 kernel: rtc_cmos 00:00: registered as rtc0 Nov 12 20:46:43.152971 kernel: rtc_cmos 00:00: setting system clock to 2024-11-12T20:46:42 UTC (1731444402) Nov 12 20:46:43.153144 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Nov 12 20:46:43.153169 kernel: intel_pstate: CPU model not supported Nov 12 20:46:43.153189 kernel: pstore: Using crash dump compression: deflate Nov 12 20:46:43.153209 kernel: pstore: Registered efi_pstore as persistent store backend Nov 12 20:46:43.153237 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:46:43.153257 kernel: Segment Routing with IPv6 Nov 12 20:46:43.153282 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:46:43.153302 kernel: NET: Registered PF_PACKET protocol family Nov 12 
20:46:43.153322 kernel: Key type dns_resolver registered Nov 12 20:46:43.153341 kernel: IPI shorthand broadcast: enabled Nov 12 20:46:43.153361 kernel: sched_clock: Marking stable (888004160, 132986921)->(1079361581, -58370500) Nov 12 20:46:43.153380 kernel: registered taskstats version 1 Nov 12 20:46:43.153400 kernel: Loading compiled-in X.509 certificates Nov 12 20:46:43.153420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:46:43.153439 kernel: Key type .fscrypt registered Nov 12 20:46:43.153462 kernel: Key type fscrypt-provisioning registered Nov 12 20:46:43.153481 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:46:43.153501 kernel: ima: No architecture policies found Nov 12 20:46:43.153521 kernel: clk: Disabling unused clocks Nov 12 20:46:43.153541 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:46:43.153560 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:46:43.153579 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:46:43.153600 kernel: Run /init as init process Nov 12 20:46:43.153624 kernel: with arguments: Nov 12 20:46:43.153662 kernel: /init Nov 12 20:46:43.153680 kernel: with environment: Nov 12 20:46:43.153699 kernel: HOME=/ Nov 12 20:46:43.153718 kernel: TERM=linux Nov 12 20:46:43.153737 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:46:43.153757 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 12 20:46:43.153782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:46:43.153813 systemd[1]: Detected virtualization google. 
Nov 12 20:46:43.153835 systemd[1]: Detected architecture x86-64. Nov 12 20:46:43.153855 systemd[1]: Running in initrd. Nov 12 20:46:43.153876 systemd[1]: No hostname configured, using default hostname. Nov 12 20:46:43.153897 systemd[1]: Hostname set to . Nov 12 20:46:43.153919 systemd[1]: Initializing machine ID from random generator. Nov 12 20:46:43.153940 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:46:43.153962 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:46:43.153988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:46:43.154011 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:46:43.154033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:46:43.154054 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:46:43.154076 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:46:43.154101 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:46:43.154128 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:46:43.154150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:46:43.154172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:46:43.154224 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:46:43.154252 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:46:43.154274 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:46:43.154297 systemd[1]: Reached target timers.target - Timer Units. 
Nov 12 20:46:43.154324 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:46:43.154347 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:46:43.154369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:46:43.154392 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:46:43.154415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:46:43.154437 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:46:43.154460 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:46:43.154482 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:46:43.154509 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:46:43.154532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:46:43.154554 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:46:43.154577 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:46:43.154599 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:46:43.154622 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:46:43.154668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:46:43.154691 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:46:43.154714 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:46:43.154785 systemd-journald[183]: Collecting audit messages is disabled. Nov 12 20:46:43.154835 systemd[1]: Finished systemd-fsck-usr.service. Nov 12 20:46:43.154865 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Nov 12 20:46:43.154889 systemd-journald[183]: Journal started
Nov 12 20:46:43.154933 systemd-journald[183]: Runtime Journal (/run/log/journal/27e8d3488d644e15aa1904459c0cd899) is 8.0M, max 148.7M, 140.7M free.
Nov 12 20:46:43.126786 systemd-modules-load[184]: Inserted module 'overlay'
Nov 12 20:46:43.163838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:46:43.170671 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:46:43.175466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:46:43.182727 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:46:43.185737 kernel: Bridge firewalling registered
Nov 12 20:46:43.185628 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 12 20:46:43.187896 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:46:43.195888 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:46:43.204867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:46:43.211169 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:46:43.224859 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:46:43.232891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:46:43.240251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:46:43.253215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:46:43.263070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:46:43.266904 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:46:43.279870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:46:43.296257 dracut-cmdline[216]: dracut-dracut-053
Nov 12 20:46:43.300721 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:46:43.346011 systemd-resolved[217]: Positive Trust Anchors:
Nov 12 20:46:43.346611 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:46:43.346848 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:46:43.356351 systemd-resolved[217]: Defaulting to hostname 'linux'.
Nov 12 20:46:43.360330 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:46:43.376930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:46:43.407688 kernel: SCSI subsystem initialized
Nov 12 20:46:43.418695 kernel: Loading iSCSI transport class v2.0-870.
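dracut logs the same kernel command line the kernel printed at boot, plus its own prepended defaults (note `rootflags=rw` and `mount.usrflags=ro` each appear twice; the kernel and most consumers keep the last occurrence). The line is whitespace-separated `key=value` tokens, with bare words acting as flags. A rough parser for such a line (a sketch; dracut and the kernel also handle quoted values):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into a {key: value} dict.

    Bare words (no '=') are stored with value None; a repeated key
    keeps its last occurrence. Only the first '=' splits the token,
    so values like LABEL=ROOT survive intact.
    """
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

args = parse_cmdline(
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
    "console=ttyS0,115200n8 flatcar.first_boot=detected"
)
print(args["root"])   # LABEL=ROOT
```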
Nov 12 20:46:43.430679 kernel: iscsi: registered transport (tcp)
Nov 12 20:46:43.454675 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:46:43.454768 kernel: QLogic iSCSI HBA Driver
Nov 12 20:46:43.510933 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:46:43.517037 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:46:43.559820 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:46:43.559910 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:46:43.559938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:46:43.607687 kernel: raid6: avx2x4 gen() 17717 MB/s
Nov 12 20:46:43.624689 kernel: raid6: avx2x2 gen() 17667 MB/s
Nov 12 20:46:43.642225 kernel: raid6: avx2x1 gen() 13794 MB/s
Nov 12 20:46:43.642298 kernel: raid6: using algorithm avx2x4 gen() 17717 MB/s
Nov 12 20:46:43.660370 kernel: raid6: .... xor() 7721 MB/s, rmw enabled
Nov 12 20:46:43.660433 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:46:43.683708 kernel: xor: automatically using best checksumming function avx
Nov 12 20:46:43.860682 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:46:43.874481 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:46:43.880880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:46:43.910263 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Nov 12 20:46:43.918134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:46:43.927148 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:46:43.963105 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Nov 12 20:46:44.001190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
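The `raid6:` lines show the kernel benchmarking several SIMD parity-generation implementations at boot and keeping the fastest one (here `avx2x4` at 17717 MB/s). The same measure-then-select pattern can be sketched as follows (the candidate functions are hypothetical stand-ins; the kernel times real SSE/AVX2 routines over fixed buffers):

```python
import time

def pick_fastest(candidates, payload):
    """Time each candidate once on the same payload and return the
    (name, function) pair with the shortest runtime -- the strategy
    the kernel's raid6 boot-time benchmark uses."""
    best_name, best_fn, best_t = None, None, float("inf")
    for name, fn in candidates.items():
        start = time.perf_counter()
        fn(payload)
        elapsed = time.perf_counter() - start
        if elapsed < best_t:
            best_name, best_fn, best_t = name, fn, elapsed
    return best_name, best_fn

# Hypothetical stand-ins for the kernel's gen() variants.
candidates = {
    "xor_bytes": lambda data: bytes(b ^ 0xFF for b in data),
    "xor_int":   lambda data: int.from_bytes(data, "little") ^ (2 ** (8 * len(data)) - 1),
}
name, fn = pick_fastest(candidates, b"\x00" * 4096)
```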
Nov 12 20:46:44.014918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:46:44.095382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:46:44.105899 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:46:44.149912 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:46:44.160921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:46:44.170774 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:46:44.177041 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:46:44.191049 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:46:44.220676 kernel: scsi host0: Virtio SCSI HBA
Nov 12 20:46:44.240740 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 12 20:46:44.268295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:46:44.296903 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:46:44.317069 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:46:44.317144 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:46:44.336246 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:46:44.336714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:46:44.348986 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:46:44.352686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:46:44.353825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:46:44.370218 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Nov 12 20:46:44.386782 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 12 20:46:44.387047 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 12 20:46:44.387293 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 12 20:46:44.387543 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 12 20:46:44.387798 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:46:44.387827 kernel: GPT:17805311 != 25165823
Nov 12 20:46:44.387852 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:46:44.387878 kernel: GPT:17805311 != 25165823
Nov 12 20:46:44.387901 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:46:44.387926 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:44.387952 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 12 20:46:44.370345 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:46:44.384451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:46:44.418561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:46:44.454670 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (450)
Nov 12 20:46:44.462667 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (453)
Nov 12 20:46:44.467964 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 12 20:46:44.483916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 12 20:46:44.496632 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 12 20:46:44.507976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
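The `GPT:17805311 != 25165823` warnings mean the disk was grown after the image was written: the primary GPT header still records the backup ("Alternate") header at LBA 17805311, but a disk of 25165824 512-byte sectors ends at LBA 25165823, which is where the backup header must live. The kernel's consistency check is simple arithmetic (a sketch of the check only, not of the repair that GNU Parted or `sgdisk -e` performs):

```python
def alt_header_ok(alt_lba: int, disk_bytes: int, sector: int = 512) -> bool:
    """A GPT backup header must sit on the disk's last LBA; after a
    resize the LBA recorded in the primary header is stale, and the
    kernel logs 'Alternate GPT header not at the end of the disk'."""
    last_lba = disk_bytes // sector - 1
    return alt_lba == last_lba

disk_bytes = 25165824 * 512  # 12 GiB, from the sd log line above
print(alt_header_ok(17805311, disk_bytes))  # False: recorded LBA is stale
print(alt_header_ok(25165823, disk_bytes))  # True once the header is moved
```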
Nov 12 20:46:44.508284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Nov 12 20:46:44.523937 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:46:44.531200 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:46:44.545072 disk-uuid[540]: Primary Header is updated.
Nov 12 20:46:44.545072 disk-uuid[540]: Secondary Entries is updated.
Nov 12 20:46:44.545072 disk-uuid[540]: Secondary Header is updated.
Nov 12 20:46:44.560823 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:44.571668 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:44.572080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:46:44.597670 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:45.606679 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:45.607705 disk-uuid[541]: The operation has completed successfully.
Nov 12 20:46:45.685861 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:46:45.686018 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:46:45.710886 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:46:45.745178 sh[566]: Success
Nov 12 20:46:45.767879 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:46:45.859038 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:46:45.866636 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:46:45.895211 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
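verity-setup.service builds the dm-verity mapping for /usr against the `verity.usrhash` root hash from the kernel command line; the kernel then checks every block read from the device against a SHA-256 Merkle tree rooted at that hash (hence "verity: sha256 using implementation"). A heavily simplified flavor of that integrity check, verifying a single blob against a trusted digest (real dm-verity hashes 4 KiB blocks into a tree rather than hashing the whole device at once):

```python
import hashlib

def verify_blob(data: bytes, expected_hex: str) -> bool:
    """Compare a blob's SHA-256 against a trusted digest -- the
    one-block degenerate case of dm-verity's Merkle-tree check."""
    return hashlib.sha256(data).hexdigest() == expected_hex

blob = b"usr partition contents"                  # stand-in payload
trusted = hashlib.sha256(blob).hexdigest()        # stand-in for verity.usrhash
print(verify_blob(blob, trusted))                 # True
print(verify_blob(blob + b"tampered", trusted))   # False
```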
Nov 12 20:46:45.931429 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:46:45.931510 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:46:45.931537 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:46:45.940868 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:46:45.947700 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:46:45.982682 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 20:46:45.988940 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:46:45.989868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:46:45.995877 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:46:46.053976 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:46.054018 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:46:46.054054 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:46:46.069067 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:46:46.069162 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:46:46.079941 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:46:46.113835 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:46.109297 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:46:46.141982 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:46:46.310381 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:46:46.323275 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:46:46.317567 ignition[643]: Ignition 2.19.0
Nov 12 20:46:46.352924 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:46:46.317579 ignition[643]: Stage: fetch-offline
Nov 12 20:46:46.317633 ignition[643]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.317675 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.389465 systemd-networkd[755]: lo: Link UP
Nov 12 20:46:46.317798 ignition[643]: parsed url from cmdline: ""
Nov 12 20:46:46.389471 systemd-networkd[755]: lo: Gained carrier
Nov 12 20:46:46.317805 ignition[643]: no config URL provided
Nov 12 20:46:46.391243 systemd-networkd[755]: Enumeration completed
Nov 12 20:46:46.317814 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.391775 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:46:46.317825 ignition[643]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.391969 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:46:46.317833 ignition[643]: failed to fetch config: resource requires networking
Nov 12 20:46:46.391977 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:46:46.318135 ignition[643]: Ignition finished successfully
Nov 12 20:46:46.393966 systemd-networkd[755]: eth0: Link UP
Nov 12 20:46:46.480182 ignition[758]: Ignition 2.19.0
Nov 12 20:46:46.393971 systemd-networkd[755]: eth0: Gained carrier
Nov 12 20:46:46.480191 ignition[758]: Stage: fetch
Nov 12 20:46:46.393982 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:46:46.480419 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.405026 systemd[1]: Reached target network.target - Network.
Nov 12 20:46:46.480431 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.411771 systemd-networkd[755]: eth0: DHCPv4 address 10.128.0.68/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 12 20:46:46.480551 ignition[758]: parsed url from cmdline: ""
Nov 12 20:46:46.436912 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:46:46.480557 ignition[758]: no config URL provided
Nov 12 20:46:46.492310 unknown[758]: fetched base config from "system"
Nov 12 20:46:46.480567 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.492323 unknown[758]: fetched base config from "system"
Nov 12 20:46:46.480579 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.492334 unknown[758]: fetched user config from "gcp"
Nov 12 20:46:46.480603 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 12 20:46:46.495812 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:46:46.486769 ignition[758]: GET result: OK
Nov 12 20:46:46.523951 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:46:46.486854 ignition[758]: parsing config with SHA512: 1966b40a0506892d93ef095a8dca5801318fba49c111064cea4600131903bf527ef266d9d0d962dfc597d5896f84a4ce78ff7d61929a8dbfba50e915c4f1d804
Nov 12 20:46:46.552585 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:46:46.493727 ignition[758]: fetch: fetch complete
Nov 12 20:46:46.568909 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:46:46.493752 ignition[758]: fetch: fetch passed
Nov 12 20:46:46.603804 systemd[1]: Finished ignition-disks.service - Ignition (disks).
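The `GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data` line is Ignition's fetch stage querying the GCE metadata server for the user-provided config; such requests must carry the `Metadata-Flavor: Google` header or the server rejects them. A sketch of how the request could be built with the standard library (it only resolves on a GCE instance, so the actual network call is left to the caller):

```python
import urllib.request

METADATA_ROOT = "http://169.254.169.254/computeMetadata/v1/"

def metadata_request(key: str) -> urllib.request.Request:
    """Build (but do not send) a GCE metadata-server request; the
    Metadata-Flavor header is mandatory on this API."""
    return urllib.request.Request(
        METADATA_ROOT + key,
        headers={"Metadata-Flavor": "Google"},
    )

req = metadata_request("instance/attributes/user-data")
print(req.full_url)
# On a GCE VM one would then do: urllib.request.urlopen(req, timeout=2).read()
```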
Nov 12 20:46:46.493846 ignition[758]: Ignition finished successfully
Nov 12 20:46:46.628734 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:46:46.549998 ignition[765]: Ignition 2.19.0
Nov 12 20:46:46.636072 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:46:46.550008 ignition[765]: Stage: kargs
Nov 12 20:46:46.663963 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:46:46.550250 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.674027 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:46:46.550268 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.704929 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:46:46.551301 ignition[765]: kargs: kargs passed
Nov 12 20:46:46.721878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:46:46.551365 ignition[765]: Ignition finished successfully
Nov 12 20:46:46.601249 ignition[770]: Ignition 2.19.0
Nov 12 20:46:46.601259 ignition[770]: Stage: disks
Nov 12 20:46:46.601464 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.601476 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.602502 ignition[770]: disks: disks passed
Nov 12 20:46:46.602559 ignition[770]: Ignition finished successfully
Nov 12 20:46:46.783677 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 12 20:46:46.957788 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:46:46.990833 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:46:47.110697 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:46:47.111552 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:46:47.120524 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:46:47.144797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:46:47.164134 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:46:47.181454 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:46:47.252947 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787)
Nov 12 20:46:47.253002 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:47.253029 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:46:47.253053 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:46:47.253074 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:46:47.253090 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:46:47.181544 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:46:47.181585 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:46:47.226542 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:46:47.263582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:46:47.293894 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:46:47.416931 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:46:47.426841 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:46:47.436815 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:46:47.446804 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:46:47.586167 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:46:47.591791 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:46:47.631695 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:47.634149 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:46:47.644141 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:46:47.674588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:46:47.685330 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:46:47.711876 ignition[902]: INFO : Ignition 2.19.0
Nov 12 20:46:47.711876 ignition[902]: INFO : Stage: mount
Nov 12 20:46:47.711876 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:47.711876 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:47.711876 ignition[902]: INFO : mount: mount passed
Nov 12 20:46:47.711876 ignition[902]: INFO : Ignition finished successfully
Nov 12 20:46:47.708798 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:46:48.118973 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:46:48.168712 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (914)
Nov 12 20:46:48.187721 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:48.187820 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:46:48.187848 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:46:48.211405 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:46:48.211497 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:46:48.215272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:46:48.266909 ignition[931]: INFO : Ignition 2.19.0
Nov 12 20:46:48.266909 ignition[931]: INFO : Stage: files
Nov 12 20:46:48.283888 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:48.283888 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:48.283888 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:46:48.283888 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:46:48.280811 unknown[931]: wrote ssh authorized keys file for user: core
Nov 12 20:46:48.384852 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:46:48.384852 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:46:48.344854 systemd-networkd[755]: eth0: Gained IPv6LL
Nov 12 20:46:52.567516 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:46:52.988891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:46:52.988891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Nov 12 20:46:53.287971 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 20:46:53.642187 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Nov 12 20:46:53.642187 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:46:53.681805 ignition[931]: INFO : files: files passed
Nov 12 20:46:53.681805 ignition[931]: INFO : Ignition finished successfully
Nov 12 20:46:53.646027 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:46:53.677929 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:46:53.682952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:46:53.723400 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:46:53.889838 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:46:53.889838 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:46:53.723520 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:46:53.938894 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:46:53.805375 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:46:53.812339 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:46:53.844937 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:46:53.907043 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:46:53.907167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:46:53.929784 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:46:53.949032 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:46:53.973153 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:46:53.979886 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:46:54.048071 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:46:54.074950 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:46:54.125791 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:46:54.128153 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:46:54.148453 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:46:54.168191 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:46:54.168391 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:46:54.212897 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:46:54.213342 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:46:54.230231 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:46:54.263017 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:46:54.263413 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:46:54.301029 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:46:54.301435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:46:54.329274 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:46:54.340241 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:46:54.357237 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:46:54.374185 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:46:54.374394 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:46:54.414919 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:46:54.415337 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:46:54.452965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:46:54.453310 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:46:54.462141 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:46:54.462334 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:46:54.510078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:46:54.510466 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:46:54.520228 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:46:54.520410 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:46:54.564907 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:46:54.586667 ignition[983]: INFO : Ignition 2.19.0 Nov 12 20:46:54.586667 ignition[983]: INFO : Stage: umount Nov 12 20:46:54.586667 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:46:54.586667 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 12 20:46:54.618819 ignition[983]: INFO : umount: umount passed Nov 12 20:46:54.618819 ignition[983]: INFO : Ignition finished successfully Nov 12 20:46:54.593410 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:46:54.641828 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:46:54.642123 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:46:54.661058 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 12 20:46:54.661251 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:46:54.703217 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:46:54.704864 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:46:54.705006 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:46:54.721501 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:46:54.721624 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:46:54.745835 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:46:54.746011 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:46:54.756034 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:46:54.756116 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:46:54.774959 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:46:54.775047 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 20:46:54.793007 systemd[1]: Stopped target network.target - Network. Nov 12 20:46:54.804047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:46:54.804136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:46:54.832031 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:46:54.848972 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:46:54.852766 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:46:54.867977 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:46:54.890985 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:46:54.915044 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:46:54.915130 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 12 20:46:54.925074 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:46:54.925143 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:46:54.959065 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:46:54.959158 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:46:54.970101 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:46:54.970191 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:46:54.991428 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:46:54.996761 systemd-networkd[755]: eth0: DHCPv6 lease lost Nov 12 20:46:55.019089 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:46:55.037384 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:46:55.037530 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:46:55.057324 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:46:55.057729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:46:55.075414 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:46:55.075537 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:46:55.085043 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:46:55.085117 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:46:55.100079 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:46:55.100162 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:46:55.124774 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:46:55.153776 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:46:55.153948 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 12 20:46:55.174942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:46:55.175043 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:46:55.194936 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:46:55.195020 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:46:55.211958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:46:55.212053 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:46:55.233117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:46:55.252466 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:46:55.252665 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:46:55.268110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:46:55.268269 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:46:55.290004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:46:55.290068 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:46:55.300112 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:46:55.300190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:46:55.335058 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:46:55.335289 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:46:55.380955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:46:55.691968 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Nov 12 20:46:55.381080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:46:55.434955 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:46:55.471033 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:46:55.471157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:46:55.492075 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:46:55.492159 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:46:55.513989 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:46:55.514066 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:46:55.525126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:46:55.525203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:46:55.543601 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:46:55.543778 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:46:55.563436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:46:55.563552 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:46:55.592231 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:46:55.616912 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:46:55.645611 systemd[1]: Switching root. 
Nov 12 20:46:55.822215 systemd-journald[183]: Journal stopped Nov 12 20:46:43.128473 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:46:43.128515 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:46:43.128533 kernel: BIOS-provided physical RAM map: Nov 12 20:46:43.128548 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Nov 12 20:46:43.128561 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Nov 12 20:46:43.128574 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Nov 12 20:46:43.128610 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Nov 12 20:46:43.128628 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Nov 12 20:46:43.128857 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Nov 12 20:46:43.128876 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Nov 12 20:46:43.128891 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Nov 12 20:46:43.128905 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Nov 12 20:46:43.128920 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Nov 12 20:46:43.128935 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Nov 12 20:46:43.128960 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Nov 12 20:46:43.128975 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Nov 12 
20:46:43.128992 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Nov 12 20:46:43.129008 kernel: NX (Execute Disable) protection: active Nov 12 20:46:43.129025 kernel: APIC: Static calls initialized Nov 12 20:46:43.129042 kernel: efi: EFI v2.7 by EDK II Nov 12 20:46:43.129059 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Nov 12 20:46:43.129076 kernel: SMBIOS 2.4 present. Nov 12 20:46:43.129093 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Nov 12 20:46:43.129109 kernel: Hypervisor detected: KVM Nov 12 20:46:43.129128 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 12 20:46:43.129143 kernel: kvm-clock: using sched offset of 12149096538 cycles Nov 12 20:46:43.129159 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 12 20:46:43.129175 kernel: tsc: Detected 2299.998 MHz processor Nov 12 20:46:43.129192 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:46:43.129209 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:46:43.129233 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Nov 12 20:46:43.129250 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Nov 12 20:46:43.129267 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:46:43.129285 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Nov 12 20:46:43.129300 kernel: Using GB pages for direct mapping Nov 12 20:46:43.129315 kernel: Secure boot disabled Nov 12 20:46:43.129331 kernel: ACPI: Early table checksum verification disabled Nov 12 20:46:43.129345 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Nov 12 20:46:43.129361 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Nov 12 20:46:43.129379 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Nov 12 
20:46:43.129403 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Nov 12 20:46:43.129425 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Nov 12 20:46:43.129443 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Nov 12 20:46:43.129461 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Nov 12 20:46:43.129479 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Nov 12 20:46:43.129498 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Nov 12 20:46:43.129516 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Nov 12 20:46:43.129536 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Nov 12 20:46:43.129553 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Nov 12 20:46:43.129569 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Nov 12 20:46:43.129585 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Nov 12 20:46:43.129603 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Nov 12 20:46:43.129620 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Nov 12 20:46:43.129638 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Nov 12 20:46:43.130704 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Nov 12 20:46:43.130723 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Nov 12 20:46:43.130749 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Nov 12 20:46:43.130767 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:46:43.130784 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 12 20:46:43.130802 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 12 20:46:43.130820 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Nov 12 20:46:43.130838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Nov 12 20:46:43.130856 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Nov 12 20:46:43.130875 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Nov 12 20:46:43.130893 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Nov 12 20:46:43.130915 kernel: Zone ranges: Nov 12 20:46:43.130933 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:46:43.130951 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 12 20:46:43.130968 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Nov 12 20:46:43.130985 kernel: Movable zone start for each node Nov 12 20:46:43.131003 kernel: Early memory node ranges Nov 12 20:46:43.131020 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Nov 12 20:46:43.131038 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Nov 12 20:46:43.131056 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Nov 12 20:46:43.131079 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Nov 12 20:46:43.131096 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Nov 12 20:46:43.131113 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Nov 12 20:46:43.131131 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:46:43.131149 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Nov 12 20:46:43.131167 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Nov 12 20:46:43.131184 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Nov 12 20:46:43.131202 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Nov 12 20:46:43.131229 kernel: ACPI: PM-Timer IO Port: 0xb008 Nov 12 20:46:43.131252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 12 20:46:43.131271 kernel: IOAPIC[0]: apic_id 0, 
version 17, address 0xfec00000, GSI 0-23 Nov 12 20:46:43.131289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 12 20:46:43.131305 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:46:43.131323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 12 20:46:43.131340 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 12 20:46:43.131358 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:46:43.131376 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 12 20:46:43.131393 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Nov 12 20:46:43.131415 kernel: Booting paravirtualized kernel on KVM Nov 12 20:46:43.131432 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:46:43.131448 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 12 20:46:43.131465 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Nov 12 20:46:43.131483 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 12 20:46:43.131498 kernel: pcpu-alloc: [0] 0 1 Nov 12 20:46:43.131514 kernel: kvm-guest: PV spinlocks enabled Nov 12 20:46:43.131532 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:46:43.131551 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:46:43.131574 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Nov 12 20:46:43.131590 kernel: random: crng init done Nov 12 20:46:43.131607 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 12 20:46:43.131624 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:46:43.131656 kernel: Fallback order for Node 0: 0 Nov 12 20:46:43.131674 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Nov 12 20:46:43.131691 kernel: Policy zone: Normal Nov 12 20:46:43.131707 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:46:43.131730 kernel: software IO TLB: area num 2. Nov 12 20:46:43.131749 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 346940K reserved, 0K cma-reserved) Nov 12 20:46:43.131768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 20:46:43.131786 kernel: Kernel/User page tables isolation: enabled Nov 12 20:46:43.131805 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:46:43.131823 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:46:43.131841 kernel: Dynamic Preempt: voluntary Nov 12 20:46:43.131860 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:46:43.131880 kernel: rcu: RCU event tracing is enabled. Nov 12 20:46:43.131920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 20:46:43.131939 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:46:43.131959 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:46:43.131981 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:46:43.132001 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:46:43.132020 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 20:46:43.132039 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 12 20:46:43.132059 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:46:43.132078 kernel: Console: colour dummy device 80x25 Nov 12 20:46:43.132100 kernel: printk: console [ttyS0] enabled Nov 12 20:46:43.132118 kernel: ACPI: Core revision 20230628 Nov 12 20:46:43.132136 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:46:43.132154 kernel: x2apic enabled Nov 12 20:46:43.132173 kernel: APIC: Switched APIC routing to: physical x2apic Nov 12 20:46:43.132190 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Nov 12 20:46:43.132208 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Nov 12 20:46:43.132237 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Nov 12 20:46:43.132259 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Nov 12 20:46:43.132276 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Nov 12 20:46:43.132294 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:46:43.132312 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Nov 12 20:46:43.132329 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Nov 12 20:46:43.132347 kernel: Spectre V2 : Mitigation: IBRS Nov 12 20:46:43.132365 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:46:43.132383 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:46:43.132401 kernel: RETBleed: Mitigation: IBRS Nov 12 20:46:43.132424 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 12 20:46:43.132441 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Nov 12 20:46:43.132460 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 12 20:46:43.132479 kernel: MDS: Mitigation: Clear CPU buffers Nov 12 20:46:43.132499 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 
12 20:46:43.132518 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:46:43.132536 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:46:43.132554 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:46:43.132573 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:46:43.132595 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 12 20:46:43.132613 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:46:43.132630 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:46:43.134719 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:46:43.134745 kernel: landlock: Up and running. Nov 12 20:46:43.134766 kernel: SELinux: Initializing. Nov 12 20:46:43.134785 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:46:43.134804 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:46:43.134824 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Nov 12 20:46:43.134849 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:46:43.134869 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:46:43.134889 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:46:43.134909 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Nov 12 20:46:43.134928 kernel: signal: max sigframe size: 1776 Nov 12 20:46:43.134947 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:46:43.134968 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:46:43.134986 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:46:43.135005 kernel: smp: Bringing up secondary CPUs ... 
Nov 12 20:46:43.135029 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:46:43.135049 kernel: .... node #0, CPUs: #1 Nov 12 20:46:43.135069 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Nov 12 20:46:43.135090 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Nov 12 20:46:43.135109 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:46:43.135128 kernel: smpboot: Max logical packages: 1 Nov 12 20:46:43.135148 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Nov 12 20:46:43.135168 kernel: devtmpfs: initialized Nov 12 20:46:43.135191 kernel: x86/mm: Memory block size: 128MB Nov 12 20:46:43.135211 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Nov 12 20:46:43.135238 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:46:43.135257 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 20:46:43.135276 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:46:43.135296 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:46:43.135315 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:46:43.135335 kernel: audit: type=2000 audit(1731444401.980:1): state=initialized audit_enabled=0 res=1 Nov 12 20:46:43.135354 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:46:43.135377 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:46:43.135396 kernel: cpuidle: using governor menu Nov 12 20:46:43.135415 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:46:43.135434 kernel: dca service started, version 1.12.1 Nov 12 20:46:43.135454 kernel: PCI: Using configuration type 1 for base access Nov 12 
20:46:43.135473 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 12 20:46:43.135494 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:46:43.135513 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:46:43.135532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:46:43.135555 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:46:43.135575 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:46:43.135594 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:46:43.135613 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:46:43.135633 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:46:43.136691 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Nov 12 20:46:43.136715 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:46:43.136734 kernel: ACPI: Interpreter enabled Nov 12 20:46:43.136752 kernel: ACPI: PM: (supports S0 S3 S5) Nov 12 20:46:43.136777 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:46:43.136795 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:46:43.136814 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 12 20:46:43.136829 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Nov 12 20:46:43.136846 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 12 20:46:43.137130 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Nov 12 20:46:43.137345 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Nov 12 20:46:43.137540 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Nov 12 20:46:43.137565 kernel: PCI host bridge to bus 0000:00 Nov 12 20:46:43.139728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Nov 12 20:46:43.139915 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 12 20:46:43.140087 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 12 20:46:43.140255 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Nov 12 20:46:43.140421 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 12 20:46:43.140628 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Nov 12 20:46:43.140852 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Nov 12 20:46:43.141045 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Nov 12 20:46:43.141237 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Nov 12 20:46:43.141432 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Nov 12 20:46:43.141615 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 12 20:46:43.142899 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Nov 12 20:46:43.143119 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Nov 12 20:46:43.143338 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Nov 12 20:46:43.143530 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Nov 12 20:46:43.144863 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Nov 12 20:46:43.145068 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Nov 12 20:46:43.145263 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Nov 12 20:46:43.145296 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 12 20:46:43.145317 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 12 20:46:43.145337 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 12 20:46:43.145357 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 12 20:46:43.145377 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 12 20:46:43.145397 kernel: iommu: Default domain type: Translated
Nov 12 20:46:43.145417 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:46:43.145436 kernel: efivars: Registered efivars operations
Nov 12 20:46:43.145457 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:46:43.145480 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 12 20:46:43.145500 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Nov 12 20:46:43.145520 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Nov 12 20:46:43.145539 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Nov 12 20:46:43.145558 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Nov 12 20:46:43.145577 kernel: vgaarb: loaded
Nov 12 20:46:43.145597 kernel: clocksource: Switched to clocksource kvm-clock
Nov 12 20:46:43.145617 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:46:43.145637 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:46:43.147713 kernel: pnp: PnP ACPI init
Nov 12 20:46:43.147733 kernel: pnp: PnP ACPI: found 7 devices
Nov 12 20:46:43.147754 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:46:43.147774 kernel: NET: Registered PF_INET protocol family
Nov 12 20:46:43.147794 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:46:43.147814 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 12 20:46:43.147834 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:46:43.147854 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:46:43.147873 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 12 20:46:43.147897 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 12 20:46:43.147917 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:46:43.147936 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:46:43.147956 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:46:43.147975 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:46:43.148177 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 12 20:46:43.148353 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 12 20:46:43.148518 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 12 20:46:43.149745 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Nov 12 20:46:43.149974 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 12 20:46:43.150003 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:46:43.150025 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 12 20:46:43.150045 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Nov 12 20:46:43.150066 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:46:43.150087 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Nov 12 20:46:43.150107 kernel: clocksource: Switched to clocksource tsc
Nov 12 20:46:43.150134 kernel: Initialise system trusted keyrings
Nov 12 20:46:43.150154 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 12 20:46:43.150174 kernel: Key type asymmetric registered
Nov 12 20:46:43.150194 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:46:43.150213 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:46:43.150239 kernel: io scheduler mq-deadline registered
Nov 12 20:46:43.150259 kernel: io scheduler kyber registered
Nov 12 20:46:43.150278 kernel: io scheduler bfq registered
Nov 12 20:46:43.150298 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:46:43.150323 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 12 20:46:43.150511 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Nov 12 20:46:43.150537 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Nov 12 20:46:43.151702 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Nov 12 20:46:43.151736 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 12 20:46:43.151930 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Nov 12 20:46:43.151956 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:46:43.151975 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:46:43.151995 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 12 20:46:43.152022 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Nov 12 20:46:43.152042 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Nov 12 20:46:43.152245 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Nov 12 20:46:43.152272 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 12 20:46:43.152293 kernel: i8042: Warning: Keylock active
Nov 12 20:46:43.152312 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 12 20:46:43.152332 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 12 20:46:43.152522 kernel: rtc_cmos 00:00: RTC can wake from S4
Nov 12 20:46:43.152794 kernel: rtc_cmos 00:00: registered as rtc0
Nov 12 20:46:43.152971 kernel: rtc_cmos 00:00: setting system clock to 2024-11-12T20:46:42 UTC (1731444402)
Nov 12 20:46:43.153144 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Nov 12 20:46:43.153169 kernel: intel_pstate: CPU model not supported
Nov 12 20:46:43.153189 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:46:43.153209 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:46:43.153237 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:46:43.153257 kernel: Segment Routing with IPv6
Nov 12 20:46:43.153282 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:46:43.153302 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:46:43.153322 kernel: Key type dns_resolver registered
Nov 12 20:46:43.153341 kernel: IPI shorthand broadcast: enabled
Nov 12 20:46:43.153361 kernel: sched_clock: Marking stable (888004160, 132986921)->(1079361581, -58370500)
Nov 12 20:46:43.153380 kernel: registered taskstats version 1
Nov 12 20:46:43.153400 kernel: Loading compiled-in X.509 certificates
Nov 12 20:46:43.153420 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:46:43.153439 kernel: Key type .fscrypt registered
Nov 12 20:46:43.153462 kernel: Key type fscrypt-provisioning registered
Nov 12 20:46:43.153481 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:46:43.153501 kernel: ima: No architecture policies found
Nov 12 20:46:43.153521 kernel: clk: Disabling unused clocks
Nov 12 20:46:43.153541 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:46:43.153560 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:46:43.153579 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:46:43.153600 kernel: Run /init as init process
Nov 12 20:46:43.153624 kernel: with arguments:
Nov 12 20:46:43.153662 kernel: /init
Nov 12 20:46:43.153680 kernel: with environment:
Nov 12 20:46:43.153699 kernel: HOME=/
Nov 12 20:46:43.153718 kernel: TERM=linux
Nov 12 20:46:43.153737 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:46:43.153757 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 12 20:46:43.153782 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:46:43.153813 systemd[1]: Detected virtualization google.
Nov 12 20:46:43.153835 systemd[1]: Detected architecture x86-64.
Nov 12 20:46:43.153855 systemd[1]: Running in initrd.
Nov 12 20:46:43.153876 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:46:43.153897 systemd[1]: Hostname set to .
Nov 12 20:46:43.153919 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:46:43.153940 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:46:43.153962 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:46:43.153988 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:46:43.154011 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:46:43.154033 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:46:43.154054 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:46:43.154076 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:46:43.154101 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:46:43.154128 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:46:43.154150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:46:43.154172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:46:43.154224 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:46:43.154252 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:46:43.154274 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:46:43.154297 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:46:43.154324 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:46:43.154347 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:46:43.154369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:46:43.154392 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:46:43.154415 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:46:43.154437 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:46:43.154460 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:46:43.154482 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:46:43.154509 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:46:43.154532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:46:43.154554 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:46:43.154577 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:46:43.154599 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:46:43.154622 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:46:43.154668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:46:43.154691 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:46:43.154714 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:46:43.154785 systemd-journald[183]: Collecting audit messages is disabled.
Nov 12 20:46:43.154835 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:46:43.154865 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:46:43.154889 systemd-journald[183]: Journal started
Nov 12 20:46:43.154933 systemd-journald[183]: Runtime Journal (/run/log/journal/27e8d3488d644e15aa1904459c0cd899) is 8.0M, max 148.7M, 140.7M free.
Nov 12 20:46:43.126786 systemd-modules-load[184]: Inserted module 'overlay'
Nov 12 20:46:43.163838 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:46:43.170671 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:46:43.175466 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:46:43.182727 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:46:43.185737 kernel: Bridge firewalling registered
Nov 12 20:46:43.185628 systemd-modules-load[184]: Inserted module 'br_netfilter'
Nov 12 20:46:43.187896 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:46:43.195888 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:46:43.204867 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:46:43.211169 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:46:43.224859 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:46:43.232891 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:46:43.240251 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:46:43.253215 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:46:43.263070 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:46:43.266904 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:46:43.279870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:46:43.296257 dracut-cmdline[216]: dracut-dracut-053
Nov 12 20:46:43.300721 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:46:43.346011 systemd-resolved[217]: Positive Trust Anchors:
Nov 12 20:46:43.346611 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:46:43.346848 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:46:43.356351 systemd-resolved[217]: Defaulting to hostname 'linux'.
Nov 12 20:46:43.360330 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:46:43.376930 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:46:43.407688 kernel: SCSI subsystem initialized
Nov 12 20:46:43.418695 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:46:43.430679 kernel: iscsi: registered transport (tcp)
Nov 12 20:46:43.454675 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:46:43.454768 kernel: QLogic iSCSI HBA Driver
Nov 12 20:46:43.510933 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:46:43.517037 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:46:43.559820 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:46:43.559910 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:46:43.559938 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:46:43.607687 kernel: raid6: avx2x4 gen() 17717 MB/s
Nov 12 20:46:43.624689 kernel: raid6: avx2x2 gen() 17667 MB/s
Nov 12 20:46:43.642225 kernel: raid6: avx2x1 gen() 13794 MB/s
Nov 12 20:46:43.642298 kernel: raid6: using algorithm avx2x4 gen() 17717 MB/s
Nov 12 20:46:43.660370 kernel: raid6: .... xor() 7721 MB/s, rmw enabled
Nov 12 20:46:43.660433 kernel: raid6: using avx2x2 recovery algorithm
Nov 12 20:46:43.683708 kernel: xor: automatically using best checksumming function avx
Nov 12 20:46:43.860682 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:46:43.874481 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:46:43.880880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:46:43.910263 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Nov 12 20:46:43.918134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:46:43.927148 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:46:43.963105 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Nov 12 20:46:44.001190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:46:44.014918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:46:44.095382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:46:44.105899 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:46:44.149912 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:46:44.160921 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:46:44.170774 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:46:44.177041 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:46:44.191049 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:46:44.220676 kernel: scsi host0: Virtio SCSI HBA
Nov 12 20:46:44.240740 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Nov 12 20:46:44.268295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:46:44.296903 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:46:44.317069 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:46:44.317144 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:46:44.336246 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:46:44.336714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:46:44.348986 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:46:44.352686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:46:44.353825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:46:44.370218 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Nov 12 20:46:44.386782 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Nov 12 20:46:44.387047 kernel: sd 0:0:1:0: [sda] Write Protect is off
Nov 12 20:46:44.387293 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Nov 12 20:46:44.387543 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 12 20:46:44.387798 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 12 20:46:44.387827 kernel: GPT:17805311 != 25165823
Nov 12 20:46:44.387852 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 12 20:46:44.387878 kernel: GPT:17805311 != 25165823
Nov 12 20:46:44.387901 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 12 20:46:44.387926 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:44.387952 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Nov 12 20:46:44.370345 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:46:44.384451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:46:44.418561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:46:44.454670 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (450)
Nov 12 20:46:44.462667 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (453)
Nov 12 20:46:44.467964 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Nov 12 20:46:44.483916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Nov 12 20:46:44.496632 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 12 20:46:44.507976 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Nov 12 20:46:44.508284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Nov 12 20:46:44.523937 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:46:44.531200 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:46:44.545072 disk-uuid[540]: Primary Header is updated.
Nov 12 20:46:44.545072 disk-uuid[540]: Secondary Entries is updated.
Nov 12 20:46:44.545072 disk-uuid[540]: Secondary Header is updated.
Nov 12 20:46:44.560823 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:44.571668 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:44.572080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:46:44.597670 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:45.606679 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:46:45.607705 disk-uuid[541]: The operation has completed successfully.
Nov 12 20:46:45.685861 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:46:45.686018 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:46:45.710886 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:46:45.745178 sh[566]: Success
Nov 12 20:46:45.767879 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:46:45.859038 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:46:45.866636 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:46:45.895211 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:46:45.931429 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:46:45.931510 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:46:45.931537 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:46:45.940868 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:46:45.947700 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:46:45.982682 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 12 20:46:45.988940 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:46:45.989868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:46:45.995877 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:46:46.053976 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:46.054018 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:46:46.054054 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:46:46.069067 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:46:46.069162 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:46:46.079941 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:46:46.113835 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:46.109297 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:46:46.141982 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:46:46.310381 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:46:46.323275 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:46:46.317567 ignition[643]: Ignition 2.19.0
Nov 12 20:46:46.352924 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:46:46.317579 ignition[643]: Stage: fetch-offline
Nov 12 20:46:46.317633 ignition[643]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.317675 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.389465 systemd-networkd[755]: lo: Link UP
Nov 12 20:46:46.317798 ignition[643]: parsed url from cmdline: ""
Nov 12 20:46:46.389471 systemd-networkd[755]: lo: Gained carrier
Nov 12 20:46:46.317805 ignition[643]: no config URL provided
Nov 12 20:46:46.391243 systemd-networkd[755]: Enumeration completed
Nov 12 20:46:46.317814 ignition[643]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.391775 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:46:46.317825 ignition[643]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.391969 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:46:46.317833 ignition[643]: failed to fetch config: resource requires networking
Nov 12 20:46:46.391977 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:46:46.318135 ignition[643]: Ignition finished successfully
Nov 12 20:46:46.393966 systemd-networkd[755]: eth0: Link UP
Nov 12 20:46:46.480182 ignition[758]: Ignition 2.19.0
Nov 12 20:46:46.393971 systemd-networkd[755]: eth0: Gained carrier
Nov 12 20:46:46.480191 ignition[758]: Stage: fetch
Nov 12 20:46:46.393982 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:46:46.480419 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.405026 systemd[1]: Reached target network.target - Network.
Nov 12 20:46:46.480431 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.411771 systemd-networkd[755]: eth0: DHCPv4 address 10.128.0.68/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 12 20:46:46.480551 ignition[758]: parsed url from cmdline: ""
Nov 12 20:46:46.436912 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:46:46.480557 ignition[758]: no config URL provided
Nov 12 20:46:46.492310 unknown[758]: fetched base config from "system"
Nov 12 20:46:46.480567 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.492323 unknown[758]: fetched base config from "system"
Nov 12 20:46:46.480579 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:46:46.492334 unknown[758]: fetched user config from "gcp"
Nov 12 20:46:46.480603 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Nov 12 20:46:46.495812 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:46:46.486769 ignition[758]: GET result: OK
Nov 12 20:46:46.523951 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:46:46.486854 ignition[758]: parsing config with SHA512: 1966b40a0506892d93ef095a8dca5801318fba49c111064cea4600131903bf527ef266d9d0d962dfc597d5896f84a4ce78ff7d61929a8dbfba50e915c4f1d804
Nov 12 20:46:46.552585 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:46:46.493727 ignition[758]: fetch: fetch complete
Nov 12 20:46:46.568909 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:46:46.493752 ignition[758]: fetch: fetch passed
Nov 12 20:46:46.603804 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:46:46.493846 ignition[758]: Ignition finished successfully
Nov 12 20:46:46.628734 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:46:46.549998 ignition[765]: Ignition 2.19.0
Nov 12 20:46:46.636072 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:46:46.550008 ignition[765]: Stage: kargs
Nov 12 20:46:46.663963 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:46:46.550250 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.674027 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:46:46.550268 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.704929 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:46:46.551301 ignition[765]: kargs: kargs passed
Nov 12 20:46:46.721878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:46:46.551365 ignition[765]: Ignition finished successfully
Nov 12 20:46:46.601249 ignition[770]: Ignition 2.19.0
Nov 12 20:46:46.601259 ignition[770]: Stage: disks
Nov 12 20:46:46.601464 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:46.601476 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:46.602502 ignition[770]: disks: disks passed
Nov 12 20:46:46.602559 ignition[770]: Ignition finished successfully
Nov 12 20:46:46.783677 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 12 20:46:46.957788 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:46:46.990833 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:46:47.110697 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:46:47.111552 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:46:47.120524 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:46:47.144797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:46:47.164134 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:46:47.181454 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 12 20:46:47.252947 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787)
Nov 12 20:46:47.253002 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:47.253029 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:46:47.253053 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:46:47.253074 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 12 20:46:47.253090 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:46:47.181544 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:46:47.181585 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:46:47.226542 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:46:47.263582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:46:47.293894 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:46:47.416931 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:46:47.426841 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:46:47.436815 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:46:47.446804 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:46:47.586167 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:46:47.591791 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:46:47.631695 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:46:47.634149 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:46:47.644141 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:46:47.674588 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:46:47.685330 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:46:47.711876 ignition[902]: INFO : Ignition 2.19.0
Nov 12 20:46:47.711876 ignition[902]: INFO : Stage: mount
Nov 12 20:46:47.711876 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:46:47.711876 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Nov 12 20:46:47.711876 ignition[902]: INFO : mount: mount passed
Nov 12 20:46:47.711876 ignition[902]: INFO : Ignition finished successfully
Nov 12 20:46:47.708798 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:46:48.118973 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:46:48.168712 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (914) Nov 12 20:46:48.187721 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:46:48.187820 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:46:48.187848 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:46:48.211405 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 12 20:46:48.211497 kernel: BTRFS info (device sda6): auto enabling async discard Nov 12 20:46:48.215272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:46:48.266909 ignition[931]: INFO : Ignition 2.19.0 Nov 12 20:46:48.266909 ignition[931]: INFO : Stage: files Nov 12 20:46:48.283888 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:46:48.283888 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 12 20:46:48.283888 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 20:46:48.283888 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 20:46:48.283888 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 20:46:48.280811 unknown[931]: wrote ssh authorized keys file for user: core Nov 12 20:46:48.384852 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:46:48.384852 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Nov 12 20:46:48.344854 systemd-networkd[755]: eth0: Gained IPv6LL Nov 12 20:46:52.567516 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 20:46:52.988891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Nov 12 20:46:52.988891 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 20:46:53.020810 ignition[931]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:46:53.020810 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Nov 12 20:46:53.287971 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 20:46:53.642187 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Nov 12 20:46:53.642187 ignition[931]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 12 20:46:53.681805 ignition[931]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 12 20:46:53.681805 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [started] writing file 
"/sysroot/etc/.ignition-result.json" Nov 12 20:46:53.681805 ignition[931]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 20:46:53.681805 ignition[931]: INFO : files: files passed Nov 12 20:46:53.681805 ignition[931]: INFO : Ignition finished successfully Nov 12 20:46:53.646027 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 20:46:53.677929 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 20:46:53.682952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 20:46:53.723400 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 20:46:53.889838 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:46:53.889838 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:46:53.723520 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 20:46:53.938894 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 20:46:53.805375 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:46:53.812339 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 20:46:53.844937 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 20:46:53.907043 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 20:46:53.907167 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 12 20:46:53.929784 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 20:46:53.949032 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Nov 12 20:46:53.973153 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 20:46:53.979886 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 20:46:54.048071 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:46:54.074950 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 20:46:54.125791 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:46:54.128153 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:46:54.148453 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:46:54.168191 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:46:54.168391 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:46:54.212897 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:46:54.213342 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:46:54.230231 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:46:54.263017 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:46:54.263413 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:46:54.301029 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:46:54.301435 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:46:54.329274 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:46:54.340241 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:46:54.357237 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:46:54.374185 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Nov 12 20:46:54.374394 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:46:54.414919 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:46:54.415337 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:46:54.452965 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:46:54.453310 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:46:54.462141 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:46:54.462334 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:46:54.510078 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:46:54.510466 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:46:54.520228 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:46:54.520410 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:46:54.564907 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:46:54.586667 ignition[983]: INFO : Ignition 2.19.0 Nov 12 20:46:54.586667 ignition[983]: INFO : Stage: umount Nov 12 20:46:54.586667 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:46:54.586667 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Nov 12 20:46:54.618819 ignition[983]: INFO : umount: umount passed Nov 12 20:46:54.618819 ignition[983]: INFO : Ignition finished successfully Nov 12 20:46:54.593410 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:46:54.641828 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:46:54.642123 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:46:54.661058 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Nov 12 20:46:54.661251 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:46:54.703217 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:46:54.704864 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:46:54.705006 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:46:54.721501 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:46:54.721624 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:46:54.745835 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:46:54.746011 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 20:46:54.756034 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:46:54.756116 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:46:54.774959 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:46:54.775047 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 20:46:54.793007 systemd[1]: Stopped target network.target - Network. Nov 12 20:46:54.804047 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:46:54.804136 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:46:54.832031 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:46:54.848972 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:46:54.852766 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:46:54.867977 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:46:54.890985 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:46:54.915044 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:46:54.915130 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 12 20:46:54.925074 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:46:54.925143 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:46:54.959065 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:46:54.959158 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:46:54.970101 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:46:54.970191 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:46:54.991428 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:46:54.996761 systemd-networkd[755]: eth0: DHCPv6 lease lost Nov 12 20:46:55.019089 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 12 20:46:55.037384 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:46:55.037530 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:46:55.057324 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:46:55.057729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:46:55.075414 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:46:55.075537 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:46:55.085043 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:46:55.085117 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:46:55.100079 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:46:55.100162 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:46:55.124774 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:46:55.153776 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:46:55.153948 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Nov 12 20:46:55.174942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:46:55.175043 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:46:55.194936 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:46:55.195020 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:46:55.211958 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 12 20:46:55.212053 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:46:55.233117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:46:55.252466 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:46:55.252665 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:46:55.268110 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:46:55.268269 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:46:55.290004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:46:55.290068 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:46:55.300112 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:46:55.300190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:46:55.335058 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:46:55.335289 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:46:55.380955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:46:55.691968 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Nov 12 20:46:55.381080 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:46:55.434955 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:46:55.471033 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:46:55.471157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:46:55.492075 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 12 20:46:55.492159 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:46:55.513989 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:46:55.514066 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:46:55.525126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:46:55.525203 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:46:55.543601 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:46:55.543778 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:46:55.563436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:46:55.563552 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:46:55.592231 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:46:55.616912 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:46:55.645611 systemd[1]: Switching root. 
Nov 12 20:46:55.822215 systemd-journald[183]: Journal stopped Nov 12 20:46:58.207547 kernel: SELinux: policy capability network_peer_controls=1 Nov 12 20:46:58.207618 kernel: SELinux: policy capability open_perms=1 Nov 12 20:46:58.207673 kernel: SELinux: policy capability extended_socket_class=1 Nov 12 20:46:58.207691 kernel: SELinux: policy capability always_check_network=0 Nov 12 20:46:58.207702 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 12 20:46:58.207712 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 12 20:46:58.207725 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 12 20:46:58.207740 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 12 20:46:58.207752 kernel: audit: type=1403 audit(1731444415.997:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 12 20:46:58.207767 systemd[1]: Successfully loaded SELinux policy in 92.180ms. Nov 12 20:46:58.207781 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.742ms. Nov 12 20:46:58.207795 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:46:58.207807 systemd[1]: Detected virtualization google. Nov 12 20:46:58.207819 systemd[1]: Detected architecture x86-64. Nov 12 20:46:58.207838 systemd[1]: Detected first boot. Nov 12 20:46:58.207852 systemd[1]: Initializing machine ID from random generator. Nov 12 20:46:58.207865 zram_generator::config[1024]: No configuration found. Nov 12 20:46:58.207879 systemd[1]: Populated /etc with preset unit settings. Nov 12 20:46:58.207892 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 12 20:46:58.207907 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Nov 12 20:46:58.207920 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 12 20:46:58.207934 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 12 20:46:58.207947 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 12 20:46:58.207959 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 12 20:46:58.207973 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 12 20:46:58.207986 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 12 20:46:58.208003 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 12 20:46:58.208016 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 12 20:46:58.208030 systemd[1]: Created slice user.slice - User and Session Slice. Nov 12 20:46:58.208043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:46:58.208056 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:46:58.208070 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 12 20:46:58.208083 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 12 20:46:58.208096 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 12 20:46:58.208112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:46:58.208126 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 12 20:46:58.208141 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:46:58.208154 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Nov 12 20:46:58.208167 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 12 20:46:58.208180 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 12 20:46:58.208198 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 12 20:46:58.208211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:46:58.208225 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:46:58.208241 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:46:58.208254 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:46:58.208268 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 12 20:46:58.208281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 12 20:46:58.208294 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:46:58.208308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:46:58.208321 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:46:58.208338 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 12 20:46:58.208367 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 12 20:46:58.208381 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 12 20:46:58.208395 systemd[1]: Mounting media.mount - External Media Directory... Nov 12 20:46:58.208408 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:46:58.208425 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 12 20:46:58.208439 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 12 20:46:58.208454 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 12 20:46:58.208468 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 12 20:46:58.208482 systemd[1]: Reached target machines.target - Containers. Nov 12 20:46:58.208495 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 12 20:46:58.208509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:46:58.208523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:46:58.208540 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 12 20:46:58.208554 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:46:58.208567 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:46:58.208588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:46:58.208602 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 12 20:46:58.208616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:46:58.208630 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 12 20:46:58.208666 kernel: fuse: init (API version 7.39) Nov 12 20:46:58.208694 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 12 20:46:58.208708 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 12 20:46:58.208722 kernel: ACPI: bus type drm_connector registered Nov 12 20:46:58.208734 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 12 20:46:58.208748 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 12 20:46:58.208761 kernel: loop: module loaded Nov 12 20:46:58.208774 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:46:58.208789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:46:58.208803 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 12 20:46:58.208820 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 12 20:46:58.208869 systemd-journald[1111]: Collecting audit messages is disabled. Nov 12 20:46:58.208899 systemd-journald[1111]: Journal started Nov 12 20:46:58.208944 systemd-journald[1111]: Runtime Journal (/run/log/journal/78204d4669b34db0ba3ca51f82b58aee) is 8.0M, max 148.7M, 140.7M free. Nov 12 20:46:56.930007 systemd[1]: Queued start job for default target multi-user.target. Nov 12 20:46:56.957890 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 12 20:46:56.959217 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 12 20:46:58.226723 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:46:58.245672 systemd[1]: verity-setup.service: Deactivated successfully. Nov 12 20:46:58.245784 systemd[1]: Stopped verity-setup.service. Nov 12 20:46:58.274809 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:46:58.284730 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:46:58.297332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 12 20:46:58.307127 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 12 20:46:58.318118 systemd[1]: Mounted media.mount - External Media Directory. Nov 12 20:46:58.329296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Nov 12 20:46:58.339083 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 12 20:46:58.350069 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 12 20:46:58.360231 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 12 20:46:58.372246 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:46:58.384236 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 12 20:46:58.384472 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 12 20:46:58.396273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:46:58.396519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:46:58.408254 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:46:58.408541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:46:58.419226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:46:58.419452 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:46:58.431212 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:46:58.431449 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:46:58.442753 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:46:58.443035 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:46:58.453279 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:46:58.463186 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:46:58.475233 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:46:58.487217 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Nov 12 20:46:58.512034 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:46:58.531830 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:46:58.551838 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:46:58.562834 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:46:58.562912 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:46:58.574156 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:46:58.598012 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:46:58.614933 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 12 20:46:58.625077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:46:58.636368 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:46:58.652439 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:46:58.661366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:46:58.665884 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:46:58.675887 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:46:58.682008 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:46:58.687729 systemd-journald[1111]: Time spent on flushing to /var/log/journal/78204d4669b34db0ba3ca51f82b58aee is 76.402ms for 929 entries. 
Nov 12 20:46:58.687729 systemd-journald[1111]: System Journal (/var/log/journal/78204d4669b34db0ba3ca51f82b58aee) is 8.0M, max 584.8M, 576.8M free.
Nov 12 20:46:58.785825 systemd-journald[1111]: Received client request to flush runtime journal.
Nov 12 20:46:58.710220 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:46:58.729003 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:46:58.748037 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:46:58.765829 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:46:58.777068 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:46:58.789385 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:46:58.801409 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:46:58.812813 kernel: loop0: detected capacity change from 0 to 142488
Nov 12 20:46:58.819444 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:46:58.832356 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:46:58.861612 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:46:58.886173 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:46:58.915564 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 20:46:58.921815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:46:58.955846 kernel: loop1: detected capacity change from 0 to 205544
Nov 12 20:46:58.949005 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Nov 12 20:46:58.949042 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Nov 12 20:46:58.965439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:46:58.980058 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:46:58.981298 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:46:59.012138 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:46:59.044711 kernel: loop2: detected capacity change from 0 to 54824
Nov 12 20:46:59.091635 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:46:59.117206 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:46:59.139766 kernel: loop3: detected capacity change from 0 to 140768
Nov 12 20:46:59.188179 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Nov 12 20:46:59.188790 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Nov 12 20:46:59.200111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:46:59.252734 kernel: loop4: detected capacity change from 0 to 142488
Nov 12 20:46:59.320870 kernel: loop5: detected capacity change from 0 to 205544
Nov 12 20:46:59.362943 kernel: loop6: detected capacity change from 0 to 54824
Nov 12 20:46:59.408959 kernel: loop7: detected capacity change from 0 to 140768
Nov 12 20:46:59.470454 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Nov 12 20:46:59.471418 (sd-merge)[1170]: Merged extensions into '/usr'.
Nov 12 20:46:59.479688 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:46:59.479717 systemd[1]: Reloading...
Nov 12 20:46:59.645683 zram_generator::config[1192]: No configuration found.
Nov 12 20:46:59.901993 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:46:59.954179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:47:00.063719 systemd[1]: Reloading finished in 582 ms.
Nov 12 20:47:00.093944 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:47:00.104532 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:47:00.132944 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:47:00.144948 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:47:00.171815 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:47:00.171854 systemd[1]: Reloading...
Nov 12 20:47:00.228316 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:47:00.229531 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:47:00.231489 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:47:00.232635 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Nov 12 20:47:00.232956 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Nov 12 20:47:00.241429 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:47:00.241683 systemd-tmpfiles[1237]: Skipping /boot
Nov 12 20:47:00.264777 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:47:00.264974 systemd-tmpfiles[1237]: Skipping /boot
Nov 12 20:47:00.324699 zram_generator::config[1263]: No configuration found.
Nov 12 20:47:00.468968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:47:00.537496 systemd[1]: Reloading finished in 364 ms.
Nov 12 20:47:00.554968 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:47:00.573476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:47:00.598134 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:47:00.615068 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:47:00.637322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:47:00.657083 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:47:00.676821 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:47:00.697108 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:47:00.705931 augenrules[1326]: No rules
Nov 12 20:47:00.710703 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:47:00.729680 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:47:00.730063 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:47:00.741101 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:47:00.754631 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Nov 12 20:47:00.763047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:47:00.782898 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:47:00.793004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:47:00.808146 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:47:00.817824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:47:00.820058 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:47:00.833082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:47:00.846596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:47:00.847493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:47:00.859690 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:47:00.871574 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:47:00.871839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:47:00.884474 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:47:00.885033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:47:00.897459 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:47:00.964823 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:47:00.982179 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:47:00.984971 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:47:00.985375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:47:00.998298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:47:01.018990 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:47:01.039889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:47:01.065882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:47:01.078672 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1345)
Nov 12 20:47:01.092698 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1345)
Nov 12 20:47:01.106521 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 12 20:47:01.114974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:47:01.122901 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:47:01.132859 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:47:01.150913 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:47:01.161848 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:47:01.161916 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:47:01.164717 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:47:01.173408 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:47:01.173661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:47:01.185348 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:47:01.185618 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:47:01.191265 systemd-resolved[1320]: Positive Trust Anchors:
Nov 12 20:47:01.191328 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:47:01.191390 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:47:01.196360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:47:01.196606 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:47:01.204477 systemd-resolved[1320]: Defaulting to hostname 'linux'.
Nov 12 20:47:01.217675 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Nov 12 20:47:01.219117 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:47:01.230460 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:47:01.231307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:47:01.258776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 12 20:47:01.259476 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:47:01.278678 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1349)
Nov 12 20:47:01.298091 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 12 20:47:01.308571 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 12 20:47:01.331278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:47:01.355422 kernel: ACPI: button: Power Button [PWRF]
Nov 12 20:47:01.351442 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Nov 12 20:47:01.361965 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:47:01.362070 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:47:01.391679 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Nov 12 20:47:01.399671 kernel: ACPI: button: Sleep Button [SLPF]
Nov 12 20:47:01.408668 kernel: EDAC MC: Ver: 3.0.0
Nov 12 20:47:01.467354 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Nov 12 20:47:01.492226 systemd-networkd[1376]: lo: Link UP
Nov 12 20:47:01.492743 systemd-networkd[1376]: lo: Gained carrier
Nov 12 20:47:01.500319 systemd-networkd[1376]: Enumeration completed
Nov 12 20:47:01.500951 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:47:01.501396 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:47:01.501403 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:47:01.503543 systemd-networkd[1376]: eth0: Link UP
Nov 12 20:47:01.503702 systemd-networkd[1376]: eth0: Gained carrier
Nov 12 20:47:01.503832 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:47:01.516076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Nov 12 20:47:01.518684 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:47:01.517800 systemd-networkd[1376]: eth0: DHCPv4 address 10.128.0.68/32, gateway 10.128.0.1 acquired from 169.254.169.254
Nov 12 20:47:01.530888 systemd[1]: Reached target network.target - Network.
Nov 12 20:47:01.544946 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:47:01.564099 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:47:01.581540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:47:01.582364 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:47:01.598957 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:47:01.612739 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:47:01.635677 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:47:01.667221 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:47:01.668525 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:47:01.672032 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:47:01.692669 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:47:01.720449 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:47:01.732398 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:47:01.745058 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:47:01.756042 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:47:01.767920 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:47:01.780119 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:47:01.790091 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:47:01.801882 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:47:01.813856 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:47:01.813916 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:47:01.822871 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:47:01.832658 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:47:01.844627 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:47:01.864616 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:47:01.875805 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:47:01.886026 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:47:01.895879 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:47:01.904974 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:47:01.905026 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:47:01.910851 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:47:01.938601 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 20:47:01.955924 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:47:01.973035 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:47:01.998919 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:47:02.008868 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:47:02.016868 jq[1427]: false
Nov 12 20:47:02.019918 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:47:02.049441 systemd[1]: Started ntpd.service - Network Time Service.
Nov 12 20:47:02.055844 extend-filesystems[1428]: Found loop4
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found loop5
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found loop6
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found loop7
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda1
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda2
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda3
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found usr
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda4
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda6
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda7
Nov 12 20:47:02.064871 extend-filesystems[1428]: Found sda9
Nov 12 20:47:02.064871 extend-filesystems[1428]: Checking size of /dev/sda9
Nov 12 20:47:02.240513 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Nov 12 20:47:02.240562 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Nov 12 20:47:02.240596 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1346)
Nov 12 20:47:02.240626 extend-filesystems[1428]: Resized partition /dev/sda9
Nov 12 20:47:02.165522 dbus-daemon[1426]: [system] SELinux support is enabled
Nov 12 20:47:02.069817 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: ----------------------------------------------------
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: ntp-4 is maintained by Network Time Foundation,
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: corporation. Support and training for ntp-4 are
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: available at https://www.nwtime.org/support
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: ----------------------------------------------------
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: proto: precision = 0.085 usec (-23)
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: basedate set to 2024-10-31
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: gps base set to 2024-11-03 (week 2339)
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Listen normally on 3 eth0 10.128.0.68:123
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Listen normally on 4 lo [::1]:123
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:44%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:44%2#123
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:44%2
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: Listening on routing socket on fd #21 for interface updates
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:47:02.263407 ntpd[1433]: 12 Nov 20:47:02 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.084 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.084 INFO Fetch successful
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.084 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.084 INFO Fetch successful
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.084 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.084 INFO Fetch successful
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.085 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Nov 12 20:47:02.268229 coreos-metadata[1425]: Nov 12 20:47:02.087 INFO Fetch successful
Nov 12 20:47:02.271160 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024)
Nov 12 20:47:02.271160 extend-filesystems[1446]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Nov 12 20:47:02.271160 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 2
Nov 12 20:47:02.271160 extend-filesystems[1446]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Nov 12 20:47:02.169354 dbus-daemon[1426]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1376 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 12 20:47:02.094256 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:47:02.340934 extend-filesystems[1428]: Resized filesystem in /dev/sda9
Nov 12 20:47:02.178368 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:48:25 UTC 2024 (1): Starting
Nov 12 20:47:02.133442 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:47:02.178411 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 20:47:02.195634 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:47:02.178426 ntpd[1433]: ----------------------------------------------------
Nov 12 20:47:02.207598 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Nov 12 20:47:02.362980 update_engine[1457]: I20241112 20:47:02.288578 1457 main.cc:92] Flatcar Update Engine starting
Nov 12 20:47:02.362980 update_engine[1457]: I20241112 20:47:02.299729 1457 update_check_scheduler.cc:74] Next update check in 4m15s
Nov 12 20:47:02.178439 ntpd[1433]: ntp-4 is maintained by Network Time Foundation,
Nov 12 20:47:02.208450 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:47:02.178453 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 20:47:02.214904 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:47:02.365923 jq[1459]: true
Nov 12 20:47:02.178466 ntpd[1433]: corporation. Support and training for ntp-4 are
Nov 12 20:47:02.257873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:47:02.178480 ntpd[1433]: available at https://www.nwtime.org/support
Nov 12 20:47:02.274849 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:47:02.178493 ntpd[1433]: ----------------------------------------------------
Nov 12 20:47:02.302322 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:47:02.181493 ntpd[1433]: proto: precision = 0.085 usec (-23)
Nov 12 20:47:02.302600 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:47:02.182003 ntpd[1433]: basedate set to 2024-10-31
Nov 12 20:47:02.304164 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:47:02.182032 ntpd[1433]: gps base set to 2024-11-03 (week 2339)
Nov 12 20:47:02.304859 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:47:02.201493 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 20:47:02.350567 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:47:02.201587 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 20:47:02.350853 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:47:02.204330 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 20:47:02.369505 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:47:02.204395 ntpd[1433]: Listen normally on 3 eth0 10.128.0.68:123
Nov 12 20:47:02.369825 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:47:02.204465 ntpd[1433]: Listen normally on 4 lo [::1]:123
Nov 12 20:47:02.204539 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:44%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 20:47:02.204573 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:44%2#123
Nov 12 20:47:02.204596 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:44%2
Nov 12 20:47:02.204673 ntpd[1433]: Listening on routing socket on fd #21 for interface updates
Nov 12 20:47:02.214107 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:47:02.214149 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 20:47:02.427315 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button)
Nov 12 20:47:02.427402 systemd-logind[1454]: Watching system buttons on /dev/input/event3 (Sleep Button)
Nov 12 20:47:02.427434 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:47:02.438802 systemd-logind[1454]: New seat seat0.
Nov 12 20:47:02.440031 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:47:02.448385 jq[1463]: true
Nov 12 20:47:02.468756 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 20:47:02.469604 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 12 20:47:02.472401 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:47:02.517579 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:47:02.532585 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:47:02.533817 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:47:02.534742 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:47:02.544547 tar[1462]: linux-amd64/helm
Nov 12 20:47:02.559117 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 12 20:47:02.568832 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:47:02.569297 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:47:02.588708 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:47:02.663358 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:47:02.664057 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:47:02.681108 systemd-networkd[1376]: eth0: Gained IPv6LL
Nov 12 20:47:02.687045 systemd[1]: Starting sshkeys.service...
Nov 12 20:47:02.695603 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:47:02.708045 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:47:02.731570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:47:02.752979 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:47:02.771544 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Nov 12 20:47:02.819227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 20:47:02.839441 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 12 20:47:02.846379 init.sh[1501]: + '[' -e /etc/default/instance_configs.cfg.template ']' Nov 12 20:47:02.854433 init.sh[1501]: + echo -e '[InstanceSetup]\nset_host_keys = false' Nov 12 20:47:02.854433 init.sh[1501]: + /usr/bin/google_instance_setup Nov 12 20:47:03.011773 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:47:03.096479 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 12 20:47:03.096750 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 12 20:47:03.097541 dbus-daemon[1426]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1487 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 12 20:47:03.121218 systemd[1]: Starting polkit.service - Authorization Manager... Nov 12 20:47:03.159259 coreos-metadata[1504]: Nov 12 20:47:03.159 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Nov 12 20:47:03.165256 coreos-metadata[1504]: Nov 12 20:47:03.165 INFO Fetch failed with 404: resource not found Nov 12 20:47:03.165256 coreos-metadata[1504]: Nov 12 20:47:03.165 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Nov 12 20:47:03.170115 coreos-metadata[1504]: Nov 12 20:47:03.166 INFO Fetch successful Nov 12 20:47:03.170115 coreos-metadata[1504]: Nov 12 20:47:03.169 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Nov 12 20:47:03.177258 coreos-metadata[1504]: Nov 12 20:47:03.175 INFO Fetch failed with 404: resource not found Nov 12 20:47:03.177258 coreos-metadata[1504]: Nov 12 20:47:03.175 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Nov 12 20:47:03.177711 coreos-metadata[1504]: Nov 12 20:47:03.177 INFO Fetch failed with 404: resource not found Nov 12 20:47:03.177711 
coreos-metadata[1504]: Nov 12 20:47:03.177 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Nov 12 20:47:03.183731 coreos-metadata[1504]: Nov 12 20:47:03.182 INFO Fetch successful Nov 12 20:47:03.189825 unknown[1504]: wrote ssh authorized keys file for user: core Nov 12 20:47:03.224594 locksmithd[1491]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:47:03.252813 polkitd[1522]: Started polkitd version 121 Nov 12 20:47:03.271639 polkitd[1522]: Loading rules from directory /etc/polkit-1/rules.d Nov 12 20:47:03.280535 update-ssh-keys[1525]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:47:03.281770 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 20:47:03.286832 polkitd[1522]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 12 20:47:03.296466 systemd[1]: Finished sshkeys.service. Nov 12 20:47:03.309865 polkitd[1522]: Finished loading, compiling and executing 2 rules Nov 12 20:47:03.311127 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 12 20:47:03.311343 systemd[1]: Started polkit.service - Authorization Manager. Nov 12 20:47:03.317990 polkitd[1522]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 12 20:47:03.403353 systemd-hostnamed[1487]: Hostname set to (transient) Nov 12 20:47:03.404878 systemd-resolved[1320]: System hostname changed to 'ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal'. Nov 12 20:47:03.605439 containerd[1464]: time="2024-11-12T20:47:03.602132581Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:47:03.660193 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
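The coreos-metadata fetches above walk a fixed fallback order for SSH keys: instance `sshKeys` (404), instance `ssh-keys` (hit), then, because `block-project-ssh-keys` is unset, project `sshKeys` (404) and project `ssh-keys` (hit). A sketch of that merge logic, with `fetch` as a hypothetical helper standing in for the metadata-server HTTP calls (the real agent's behavior may differ in details):

```python
def collect_ssh_keys(fetch):
    """Merge instance- and project-level SSH keys, mirroring the log above.

    `fetch(path)` returns the attribute's value, or None on a 404.
    """
    keys = []
    for path in ("instance/attributes/sshKeys",
                 "instance/attributes/ssh-keys"):
        value = fetch(path)
        if value:
            keys.append(value)
    # Project-level keys are consulted unless the instance opts out.
    if str(fetch("instance/attributes/block-project-ssh-keys")).lower() != "true":
        for path in ("project/attributes/sshKeys",
                     "project/attributes/ssh-keys"):
            value = fetch(path)
            if value:
                keys.append(value)
    return keys

# Example: a dict's .get plays the role of the metadata server.
metadata = {"instance/attributes/ssh-keys": "core:ssh-rsa AAAA... instance",
            "project/attributes/ssh-keys": "core:ssh-rsa AAAA... project"}
print(collect_ssh_keys(metadata.get))
```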
Nov 12 20:47:03.668043 sshd_keygen[1458]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:47:03.751964 containerd[1464]: time="2024-11-12T20:47:03.751709451Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:03.767359 containerd[1464]: time="2024-11-12T20:47:03.767288548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:03.767359 containerd[1464]: time="2024-11-12T20:47:03.767354422Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:47:03.767561 containerd[1464]: time="2024-11-12T20:47:03.767381481Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:47:03.770415 containerd[1464]: time="2024-11-12T20:47:03.767868770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:47:03.770415 containerd[1464]: time="2024-11-12T20:47:03.767940374Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:03.770415 containerd[1464]: time="2024-11-12T20:47:03.768078243Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:03.770415 containerd[1464]: time="2024-11-12T20:47:03.768125300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:03.770871 containerd[1464]: time="2024-11-12T20:47:03.770829586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:03.770952 containerd[1464]: time="2024-11-12T20:47:03.770891254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:03.770952 containerd[1464]: time="2024-11-12T20:47:03.770920686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:03.770952 containerd[1464]: time="2024-11-12T20:47:03.770939774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:03.771465 containerd[1464]: time="2024-11-12T20:47:03.771431595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:03.771922 containerd[1464]: time="2024-11-12T20:47:03.771867277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:47:03.774085 containerd[1464]: time="2024-11-12T20:47:03.774014896Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:47:03.774085 containerd[1464]: time="2024-11-12T20:47:03.774078744Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:47:03.774307 containerd[1464]: time="2024-11-12T20:47:03.774260989Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Nov 12 20:47:03.777618 containerd[1464]: time="2024-11-12T20:47:03.775769971Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:47:03.788968 containerd[1464]: time="2024-11-12T20:47:03.788915782Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:47:03.789275 containerd[1464]: time="2024-11-12T20:47:03.789227529Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:47:03.792984 containerd[1464]: time="2024-11-12T20:47:03.792950034Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:47:03.794231 containerd[1464]: time="2024-11-12T20:47:03.793168136Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:47:03.794231 containerd[1464]: time="2024-11-12T20:47:03.793207607Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:47:03.794231 containerd[1464]: time="2024-11-12T20:47:03.793464231Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:47:03.795751 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:47:03.796807 containerd[1464]: time="2024-11-12T20:47:03.796305891Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 12 20:47:03.796807 containerd[1464]: time="2024-11-12T20:47:03.796507642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:47:03.796807 containerd[1464]: time="2024-11-12T20:47:03.796535530Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Nov 12 20:47:03.796807 containerd[1464]: time="2024-11-12T20:47:03.796558598Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:47:03.796807 containerd[1464]: time="2024-11-12T20:47:03.796584625Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.796807 containerd[1464]: time="2024-11-12T20:47:03.796608693Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.796807 containerd[1464]: time="2024-11-12T20:47:03.796630392Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.797279 containerd[1464]: time="2024-11-12T20:47:03.797250211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.797526 containerd[1464]: time="2024-11-12T20:47:03.797393905Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.797526 containerd[1464]: time="2024-11-12T20:47:03.797424544Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.797526 containerd[1464]: time="2024-11-12T20:47:03.797448190Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.797526 containerd[1464]: time="2024-11-12T20:47:03.797469947Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:47:03.797526 containerd[1464]: time="2024-11-12T20:47:03.797504866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797818097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797849267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797875594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797898939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797921235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797941135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797964939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.797987345Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.798012431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.798033310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.798055741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.798098209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.798126298Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.798169264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.798446 containerd[1464]: time="2024-11-12T20:47:03.798191204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.798211044Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799059608Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799697867Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799732683Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799758427Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799776833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799801048Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799818787Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:47:03.800365 containerd[1464]: time="2024-11-12T20:47:03.799838420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:47:03.804283 containerd[1464]: time="2024-11-12T20:47:03.801412674Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:47:03.806207 containerd[1464]: time="2024-11-12T20:47:03.804796906Z" level=info msg="Connect containerd service" Nov 12 20:47:03.806207 containerd[1464]: time="2024-11-12T20:47:03.804938286Z" level=info msg="using legacy CRI server" Nov 12 20:47:03.806207 containerd[1464]: time="2024-11-12T20:47:03.804956121Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:47:03.806207 containerd[1464]: time="2024-11-12T20:47:03.805246719Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:47:03.813681 containerd[1464]: time="2024-11-12T20:47:03.812408628Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
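The `failed to load cni during init ... no network config found in /etc/cni/net.d` error above is expected on a node that has not yet joined a cluster: containerd's CRI plugin looks for a CNI network config there, and nothing has installed one yet. For illustration only (values invented, not taken from this host), the file it expects is a conflist of roughly this shape:

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Once a CNI plugin (typically deployed by the cluster bootstrap) drops such a file into `/etc/cni/net.d`, the CRI plugin's conf syncer started later in this log picks it up without a containerd restart.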
failed to load cni config" Nov 12 20:47:03.815260 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:47:03.816725 containerd[1464]: time="2024-11-12T20:47:03.815713973Z" level=info msg="Start subscribing containerd event" Nov 12 20:47:03.816725 containerd[1464]: time="2024-11-12T20:47:03.815813344Z" level=info msg="Start recovering state" Nov 12 20:47:03.816725 containerd[1464]: time="2024-11-12T20:47:03.815927303Z" level=info msg="Start event monitor" Nov 12 20:47:03.816725 containerd[1464]: time="2024-11-12T20:47:03.815957780Z" level=info msg="Start snapshots syncer" Nov 12 20:47:03.816725 containerd[1464]: time="2024-11-12T20:47:03.815973138Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:47:03.816725 containerd[1464]: time="2024-11-12T20:47:03.815997418Z" level=info msg="Start streaming server" Nov 12 20:47:03.823849 containerd[1464]: time="2024-11-12T20:47:03.822118361Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:47:03.823849 containerd[1464]: time="2024-11-12T20:47:03.822303507Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:47:03.823849 containerd[1464]: time="2024-11-12T20:47:03.823778821Z" level=info msg="containerd successfully booted in 0.225425s" Nov 12 20:47:03.833810 systemd[1]: Started sshd@0-10.128.0.68:22-139.178.89.65:50844.service - OpenSSH per-connection server daemon (139.178.89.65:50844). Nov 12 20:47:03.846780 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:47:03.895285 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:47:03.896743 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:47:03.919317 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 20:47:03.979034 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:47:04.000718 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Nov 12 20:47:04.019215 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:47:04.031460 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:47:04.219523 tar[1462]: linux-amd64/LICENSE Nov 12 20:47:04.220480 tar[1462]: linux-amd64/README.md Nov 12 20:47:04.246178 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 20:47:04.278675 sshd[1548]: Accepted publickey for core from 139.178.89.65 port 50844 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:47:04.286412 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:04.316683 systemd-logind[1454]: New session 1 of user core. Nov 12 20:47:04.317325 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:47:04.322449 instance-setup[1505]: INFO Running google_set_multiqueue. Nov 12 20:47:04.336962 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:47:04.352785 instance-setup[1505]: INFO Set channels for eth0 to 2. Nov 12 20:47:04.356052 instance-setup[1505]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Nov 12 20:47:04.358190 instance-setup[1505]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Nov 12 20:47:04.358595 instance-setup[1505]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Nov 12 20:47:04.360866 instance-setup[1505]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Nov 12 20:47:04.361825 instance-setup[1505]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Nov 12 20:47:04.364230 instance-setup[1505]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Nov 12 20:47:04.364290 instance-setup[1505]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Nov 12 20:47:04.367454 instance-setup[1505]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Nov 12 20:47:04.382958 instance-setup[1505]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 12 20:47:04.385620 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:47:04.390854 instance-setup[1505]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Nov 12 20:47:04.393248 instance-setup[1505]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Nov 12 20:47:04.393304 instance-setup[1505]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Nov 12 20:47:04.412023 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 20:47:04.431256 init.sh[1501]: + /usr/bin/google_metadata_script_runner --script-type startup Nov 12 20:47:04.452527 (systemd)[1593]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:47:04.686426 startup-script[1594]: INFO Starting startup scripts. Nov 12 20:47:04.693909 systemd[1593]: Queued start job for default target default.target. Nov 12 20:47:04.697594 startup-script[1594]: INFO No startup scripts found in metadata. Nov 12 20:47:04.697667 startup-script[1594]: INFO Finished running startup scripts. Nov 12 20:47:04.699757 systemd[1593]: Created slice app.slice - User Application Slice. Nov 12 20:47:04.699953 systemd[1593]: Reached target paths.target - Paths. Nov 12 20:47:04.699983 systemd[1593]: Reached target timers.target - Timers. Nov 12 20:47:04.705293 systemd[1593]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:47:04.728490 init.sh[1501]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Nov 12 20:47:04.728724 init.sh[1501]: + daemon_pids=() Nov 12 20:47:04.728877 init.sh[1501]: + for d in accounts clock_skew network Nov 12 20:47:04.730187 init.sh[1501]: + daemon_pids+=($!) 
Nov 12 20:47:04.730187 init.sh[1501]: + for d in accounts clock_skew network Nov 12 20:47:04.730187 init.sh[1501]: + daemon_pids+=($!) Nov 12 20:47:04.730187 init.sh[1501]: + for d in accounts clock_skew network Nov 12 20:47:04.730187 init.sh[1501]: + daemon_pids+=($!) Nov 12 20:47:04.730187 init.sh[1501]: + NOTIFY_SOCKET=/run/systemd/notify Nov 12 20:47:04.730187 init.sh[1501]: + /usr/bin/systemd-notify --ready Nov 12 20:47:04.730800 init.sh[1603]: + /usr/bin/google_accounts_daemon Nov 12 20:47:04.731558 init.sh[1604]: + /usr/bin/google_clock_skew_daemon Nov 12 20:47:04.732672 init.sh[1605]: + /usr/bin/google_network_daemon Nov 12 20:47:04.734744 systemd[1593]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:47:04.736052 systemd[1593]: Reached target sockets.target - Sockets. Nov 12 20:47:04.736085 systemd[1593]: Reached target basic.target - Basic System. Nov 12 20:47:04.736170 systemd[1593]: Reached target default.target - Main User Target. Nov 12 20:47:04.736226 systemd[1593]: Startup finished in 268ms. Nov 12 20:47:04.737082 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 20:47:04.755952 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:47:04.766355 systemd[1]: Started oem-gce.service - GCE Linux Agent. Nov 12 20:47:04.787797 init.sh[1501]: + wait -n 1603 1604 1605 Nov 12 20:47:05.022118 systemd[1]: Started sshd@1-10.128.0.68:22-139.178.89.65:50860.service - OpenSSH per-connection server daemon (139.178.89.65:50860). Nov 12 20:47:05.180047 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:44%2]:123 Nov 12 20:47:05.180724 ntpd[1433]: 12 Nov 20:47:05 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:44%2]:123 Nov 12 20:47:05.219535 google-clock-skew[1604]: INFO Starting Google Clock Skew daemon. Nov 12 20:47:05.231697 google-clock-skew[1604]: INFO Clock drift token has changed: 0. Nov 12 20:47:05.295160 google-networking[1605]: INFO Starting Google Networking daemon. 
Nov 12 20:47:05.342911 groupadd[1620]: group added to /etc/group: name=google-sudoers, GID=1000 Nov 12 20:47:05.348517 groupadd[1620]: group added to /etc/gshadow: name=google-sudoers Nov 12 20:47:05.349487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:47:05.361733 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:47:05.366578 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:47:05.370793 sshd[1611]: Accepted publickey for core from 139.178.89.65 port 50860 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:47:05.373676 systemd[1]: Startup finished in 1.067s (kernel) + 13.216s (initrd) + 9.456s (userspace) = 23.740s. Nov 12 20:47:05.373801 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:05.392795 systemd-logind[1454]: New session 2 of user core. Nov 12 20:47:05.397387 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:47:05.432694 groupadd[1620]: new group: name=google-sudoers, GID=1000 Nov 12 20:47:05.467527 google-accounts[1603]: INFO Starting Google Accounts daemon. Nov 12 20:47:05.481742 google-accounts[1603]: WARNING OS Login not installed. Nov 12 20:47:05.484073 google-accounts[1603]: INFO Creating a new user account for 0. Nov 12 20:47:05.490382 init.sh[1640]: useradd: invalid user name '0': use --badname to ignore Nov 12 20:47:05.490762 google-accounts[1603]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Nov 12 20:47:05.587463 sshd[1611]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:05.593492 systemd[1]: sshd@1-10.128.0.68:22-139.178.89.65:50860.service: Deactivated successfully. Nov 12 20:47:05.596457 systemd[1]: session-2.scope: Deactivated successfully. 
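The `useradd: invalid user name '0'` failure above (exit status 3) is the accounts daemon mechanically trying to create a local user for a metadata entry named `0`; shadow-utils rejects names that fall outside its portable pattern, roughly `[a-z_][a-z0-9_-]*[$]?`. A sketch approximating that check (not shadow-utils' exact implementation):

```python
import re

# Approximation of shadow-utils' default username check: start with a
# lowercase letter or underscore, then letters/digits/underscore/dash,
# with an optional trailing '$' (Samba machine accounts), max 32 chars.
_VALID_NAME = re.compile(r"[a-z_][a-z0-9_-]*\$?")

def is_valid_username(name: str) -> bool:
    return bool(_VALID_NAME.fullmatch(name)) and len(name) <= 32

print(is_valid_username("core"))  # True
print(is_valid_username("0"))     # False: starts with a digit, as in the log
```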
Nov 12 20:47:05.598704 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit. Nov 12 20:47:05.600977 systemd-logind[1454]: Removed session 2. Nov 12 20:47:05.648204 systemd[1]: Started sshd@2-10.128.0.68:22-139.178.89.65:50868.service - OpenSSH per-connection server daemon (139.178.89.65:50868). Nov 12 20:47:06.000097 systemd-resolved[1320]: Clock change detected. Flushing caches. Nov 12 20:47:06.001406 google-clock-skew[1604]: INFO Synced system time with hardware clock. Nov 12 20:47:06.204327 sshd[1650]: Accepted publickey for core from 139.178.89.65 port 50868 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:47:06.206524 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:06.215337 systemd-logind[1454]: New session 3 of user core. Nov 12 20:47:06.218841 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:47:06.417840 sshd[1650]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:06.423276 systemd[1]: sshd@2-10.128.0.68:22-139.178.89.65:50868.service: Deactivated successfully. Nov 12 20:47:06.425939 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 20:47:06.428683 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit. Nov 12 20:47:06.430823 systemd-logind[1454]: Removed session 3. Nov 12 20:47:06.468219 systemd[1]: Started sshd@3-10.128.0.68:22-139.178.89.65:50874.service - OpenSSH per-connection server daemon (139.178.89.65:50874). 
Nov 12 20:47:06.539577 kubelet[1627]: E1112 20:47:06.539480 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:47:06.541766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:47:06.542004 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:47:06.542407 systemd[1]: kubelet.service: Consumed 1.292s CPU time. Nov 12 20:47:06.773225 sshd[1659]: Accepted publickey for core from 139.178.89.65 port 50874 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:47:06.775263 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:06.781856 systemd-logind[1454]: New session 4 of user core. Nov 12 20:47:06.789818 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:47:06.990182 sshd[1659]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:06.994439 systemd[1]: sshd@3-10.128.0.68:22-139.178.89.65:50874.service: Deactivated successfully. Nov 12 20:47:06.996787 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:47:06.998639 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:47:07.000250 systemd-logind[1454]: Removed session 4. Nov 12 20:47:07.046394 systemd[1]: Started sshd@4-10.128.0.68:22-139.178.89.65:50888.service - OpenSSH per-connection server daemon (139.178.89.65:50888). 
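The kubelet failure above is likewise a pre-bootstrap condition rather than a fault: the unit starts at boot, but `/var/lib/kubelet/config.yaml` is only written when `kubeadm init` or `kubeadm join` runs, so systemd restarts the kubelet until then. For reference, the file it is looking for is a KubeletConfiguration of roughly this shape (illustrative values, not this node's eventual config):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
```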
Nov 12 20:47:07.345365 sshd[1667]: Accepted publickey for core from 139.178.89.65 port 50888 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:47:07.347209 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:07.353516 systemd-logind[1454]: New session 5 of user core. Nov 12 20:47:07.360857 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:47:07.544334 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:47:07.545020 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:07.562354 sudo[1670]: pam_unix(sudo:session): session closed for user root Nov 12 20:47:07.606321 sshd[1667]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:07.611496 systemd[1]: sshd@4-10.128.0.68:22-139.178.89.65:50888.service: Deactivated successfully. Nov 12 20:47:07.614018 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:47:07.615869 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:47:07.617404 systemd-logind[1454]: Removed session 5. Nov 12 20:47:07.662266 systemd[1]: Started sshd@5-10.128.0.68:22-139.178.89.65:47556.service - OpenSSH per-connection server daemon (139.178.89.65:47556). Nov 12 20:47:07.946424 sshd[1675]: Accepted publickey for core from 139.178.89.65 port 47556 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:47:07.948379 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:07.954965 systemd-logind[1454]: New session 6 of user core. Nov 12 20:47:07.965916 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 12 20:47:08.125686 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:47:08.126187 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:08.131753 sudo[1679]: pam_unix(sudo:session): session closed for user root Nov 12 20:47:08.147084 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:47:08.147769 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:08.171049 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:47:08.174294 auditctl[1682]: No rules Nov 12 20:47:08.175955 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:47:08.176305 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:47:08.179012 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:47:08.249410 augenrules[1700]: No rules Nov 12 20:47:08.250825 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:47:08.253087 sudo[1678]: pam_unix(sudo:session): session closed for user root Nov 12 20:47:08.296672 sshd[1675]: pam_unix(sshd:session): session closed for user core Nov 12 20:47:08.302602 systemd[1]: sshd@5-10.128.0.68:22-139.178.89.65:47556.service: Deactivated successfully. Nov 12 20:47:08.305103 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:47:08.306089 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:47:08.307708 systemd-logind[1454]: Removed session 6. Nov 12 20:47:08.360002 systemd[1]: Started sshd@6-10.128.0.68:22-139.178.89.65:47560.service - OpenSSH per-connection server daemon (139.178.89.65:47560). 
Nov 12 20:47:08.644044 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 47560 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:47:08.646071 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:47:08.652407 systemd-logind[1454]: New session 7 of user core. Nov 12 20:47:08.661860 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:47:08.825131 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:47:08.825674 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:47:09.276989 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:47:09.289278 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:47:09.736855 dockerd[1726]: time="2024-11-12T20:47:09.736681615Z" level=info msg="Starting up" Nov 12 20:47:10.032029 dockerd[1726]: time="2024-11-12T20:47:10.031574594Z" level=info msg="Loading containers: start." Nov 12 20:47:10.186587 kernel: Initializing XFRM netlink socket Nov 12 20:47:10.298346 systemd-networkd[1376]: docker0: Link UP Nov 12 20:47:10.319868 dockerd[1726]: time="2024-11-12T20:47:10.319798557Z" level=info msg="Loading containers: done." 
Nov 12 20:47:10.344597 dockerd[1726]: time="2024-11-12T20:47:10.344508296Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:47:10.344938 dockerd[1726]: time="2024-11-12T20:47:10.344682275Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:47:10.344938 dockerd[1726]: time="2024-11-12T20:47:10.344838007Z" level=info msg="Daemon has completed initialization" Nov 12 20:47:10.388595 dockerd[1726]: time="2024-11-12T20:47:10.388435878Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:47:10.389021 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:47:11.324875 containerd[1464]: time="2024-11-12T20:47:11.324818831Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\"" Nov 12 20:47:11.860651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount184491200.mount: Deactivated successfully. 
Nov 12 20:47:13.414626 containerd[1464]: time="2024-11-12T20:47:13.414541515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:13.416288 containerd[1464]: time="2024-11-12T20:47:13.416228995Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.2: active requests=0, bytes read=27982216" Nov 12 20:47:13.417951 containerd[1464]: time="2024-11-12T20:47:13.417854136Z" level=info msg="ImageCreate event name:\"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:13.422260 containerd[1464]: time="2024-11-12T20:47:13.422191080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:13.423964 containerd[1464]: time="2024-11-12T20:47:13.423698010Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.2\" with image id \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0\", size \"27972388\" in 2.098823674s" Nov 12 20:47:13.423964 containerd[1464]: time="2024-11-12T20:47:13.423751885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.2\" returns image reference \"sha256:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173\"" Nov 12 20:47:13.426822 containerd[1464]: time="2024-11-12T20:47:13.426718545Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\"" Nov 12 20:47:14.961278 containerd[1464]: time="2024-11-12T20:47:14.961202296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:14.962950 containerd[1464]: time="2024-11-12T20:47:14.962877687Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.2: active requests=0, bytes read=24703856" Nov 12 20:47:14.964378 containerd[1464]: time="2024-11-12T20:47:14.964305387Z" level=info msg="ImageCreate event name:\"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:14.968462 containerd[1464]: time="2024-11-12T20:47:14.968385048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:14.970066 containerd[1464]: time="2024-11-12T20:47:14.969853055Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.2\" with image id \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752\", size \"26147288\" in 1.543089692s" Nov 12 20:47:14.970066 containerd[1464]: time="2024-11-12T20:47:14.969903258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.2\" returns image reference \"sha256:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503\"" Nov 12 20:47:14.971004 containerd[1464]: time="2024-11-12T20:47:14.970955652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\"" Nov 12 20:47:16.222667 containerd[1464]: time="2024-11-12T20:47:16.222588071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:16.224233 containerd[1464]: time="2024-11-12T20:47:16.224172492Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.2: active requests=0, bytes read=18659522" Nov 12 20:47:16.225746 containerd[1464]: time="2024-11-12T20:47:16.225661520Z" level=info msg="ImageCreate event name:\"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:16.229995 containerd[1464]: time="2024-11-12T20:47:16.229897096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:16.231752 containerd[1464]: time="2024-11-12T20:47:16.231542078Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.2\" with image id \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282\", size \"20102990\" in 1.260402132s" Nov 12 20:47:16.231752 containerd[1464]: time="2024-11-12T20:47:16.231620781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.2\" returns image reference \"sha256:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856\"" Nov 12 20:47:16.232843 containerd[1464]: time="2024-11-12T20:47:16.232418354Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\"" Nov 12 20:47:16.792349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:47:16.797872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:47:17.060476 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:47:17.067151 (kubelet)[1937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:47:17.122829 kubelet[1937]: E1112 20:47:17.122755 1937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:47:17.127277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:47:17.127533 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:47:24.580633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632967851.mount: Deactivated successfully. Nov 12 20:47:25.201766 containerd[1464]: time="2024-11-12T20:47:25.201694235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:25.203282 containerd[1464]: time="2024-11-12T20:47:25.203202185Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.2: active requests=0, bytes read=30228709" Nov 12 20:47:25.205259 containerd[1464]: time="2024-11-12T20:47:25.205175425Z" level=info msg="ImageCreate event name:\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:25.208520 containerd[1464]: time="2024-11-12T20:47:25.208443554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:25.209730 containerd[1464]: time="2024-11-12T20:47:25.209507273Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.2\" with image id 
\"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\", repo tag \"registry.k8s.io/kube-proxy:v1.31.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe\", size \"30225833\" in 8.977045206s" Nov 12 20:47:25.209730 containerd[1464]: time="2024-11-12T20:47:25.209583204Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.2\" returns image reference \"sha256:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38\"" Nov 12 20:47:25.210599 containerd[1464]: time="2024-11-12T20:47:25.210382400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:47:25.700749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4143878028.mount: Deactivated successfully. Nov 12 20:47:26.786187 containerd[1464]: time="2024-11-12T20:47:26.786088198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:26.787981 containerd[1464]: time="2024-11-12T20:47:26.787902829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Nov 12 20:47:26.789709 containerd[1464]: time="2024-11-12T20:47:26.789636809Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:26.793796 containerd[1464]: time="2024-11-12T20:47:26.793727563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:26.799588 containerd[1464]: time="2024-11-12T20:47:26.798626871Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.588192843s" Nov 12 20:47:26.799588 containerd[1464]: time="2024-11-12T20:47:26.798692636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:47:26.801427 containerd[1464]: time="2024-11-12T20:47:26.801362537Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 12 20:47:27.377903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:47:27.390916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:47:28.114313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:47:28.120985 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:47:28.169419 kubelet[2004]: E1112 20:47:28.169255 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:47:28.171898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:47:28.172116 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:47:28.360503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307544617.mount: Deactivated successfully. 
Nov 12 20:47:28.369145 containerd[1464]: time="2024-11-12T20:47:28.368946862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:28.371174 containerd[1464]: time="2024-11-12T20:47:28.371097468Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Nov 12 20:47:28.372898 containerd[1464]: time="2024-11-12T20:47:28.372829475Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:28.376184 containerd[1464]: time="2024-11-12T20:47:28.376110410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:28.377791 containerd[1464]: time="2024-11-12T20:47:28.377158955Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.575601998s" Nov 12 20:47:28.377791 containerd[1464]: time="2024-11-12T20:47:28.377210038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 12 20:47:28.378102 containerd[1464]: time="2024-11-12T20:47:28.378066640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Nov 12 20:47:28.814428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount664098811.mount: Deactivated successfully. 
Nov 12 20:47:30.984266 containerd[1464]: time="2024-11-12T20:47:30.984193944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:30.986013 containerd[1464]: time="2024-11-12T20:47:30.985943349Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786252" Nov 12 20:47:30.987403 containerd[1464]: time="2024-11-12T20:47:30.987327568Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:30.991242 containerd[1464]: time="2024-11-12T20:47:30.991166699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:47:30.993063 containerd[1464]: time="2024-11-12T20:47:30.992884410Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.614768498s" Nov 12 20:47:30.993063 containerd[1464]: time="2024-11-12T20:47:30.992935091Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Nov 12 20:47:33.695710 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 12 20:47:33.713342 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:47:33.720039 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:47:33.762973 systemd[1]: Reloading requested from client PID 2098 ('systemctl') (unit session-7.scope)... 
Nov 12 20:47:33.762995 systemd[1]: Reloading... Nov 12 20:47:33.930597 zram_generator::config[2138]: No configuration found. Nov 12 20:47:34.093917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:47:34.201755 systemd[1]: Reloading finished in 438 ms. Nov 12 20:47:34.266709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:47:34.274986 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:47:34.276803 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:47:34.277098 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:47:34.283089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:47:34.504733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:47:34.517154 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:47:34.577282 kubelet[2191]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:47:34.577282 kubelet[2191]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:47:34.577282 kubelet[2191]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:47:34.579376 kubelet[2191]: I1112 20:47:34.579286 2191 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:47:35.358650 kubelet[2191]: I1112 20:47:35.358596 2191 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Nov 12 20:47:35.358650 kubelet[2191]: I1112 20:47:35.358640 2191 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:47:35.359038 kubelet[2191]: I1112 20:47:35.359002 2191 server.go:929] "Client rotation is on, will bootstrap in background" Nov 12 20:47:35.394528 kubelet[2191]: E1112 20:47:35.394481 2191 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.68:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:47:35.400588 kubelet[2191]: I1112 20:47:35.399090 2191 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:47:35.413982 kubelet[2191]: E1112 20:47:35.413909 2191 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 12 20:47:35.414214 kubelet[2191]: I1112 20:47:35.413994 2191 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 12 20:47:35.420614 kubelet[2191]: I1112 20:47:35.420573 2191 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:47:35.420833 kubelet[2191]: I1112 20:47:35.420761 2191 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 12 20:47:35.421024 kubelet[2191]: I1112 20:47:35.420987 2191 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:47:35.421293 kubelet[2191]: I1112 20:47:35.421026 2191 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topo
logyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 12 20:47:35.421293 kubelet[2191]: I1112 20:47:35.421290 2191 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:47:35.421528 kubelet[2191]: I1112 20:47:35.421307 2191 container_manager_linux.go:300] "Creating device plugin manager" Nov 12 20:47:35.421528 kubelet[2191]: I1112 20:47:35.421473 2191 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:47:35.425845 kubelet[2191]: I1112 20:47:35.425788 2191 kubelet.go:408] "Attempting to sync node with API server" Nov 12 20:47:35.425845 kubelet[2191]: I1112 20:47:35.425833 2191 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:47:35.426007 kubelet[2191]: I1112 20:47:35.425891 2191 kubelet.go:314] "Adding apiserver pod source" Nov 12 20:47:35.426007 kubelet[2191]: I1112 20:47:35.425922 2191 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:47:35.429305 kubelet[2191]: W1112 20:47:35.429238 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused Nov 12 20:47:35.430655 kubelet[2191]: E1112 20:47:35.430455 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:47:35.437975 kubelet[2191]: W1112 20:47:35.437734 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.128.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused Nov 12 20:47:35.438518 kubelet[2191]: E1112 20:47:35.438221 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError" Nov 12 20:47:35.438518 kubelet[2191]: I1112 20:47:35.438377 2191 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:47:35.441464 kubelet[2191]: I1112 20:47:35.441430 2191 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:47:35.444338 kubelet[2191]: W1112 20:47:35.442907 2191 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 20:47:35.444633 kubelet[2191]: I1112 20:47:35.444595 2191 server.go:1269] "Started kubelet"
Nov 12 20:47:35.446215 kubelet[2191]: I1112 20:47:35.446154 2191 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:47:35.447711 kubelet[2191]: I1112 20:47:35.447545 2191 server.go:460] "Adding debug handlers to kubelet server"
Nov 12 20:47:35.451361 kubelet[2191]: I1112 20:47:35.450814 2191 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:47:35.453540 kubelet[2191]: I1112 20:47:35.453468 2191 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:47:35.453955 kubelet[2191]: I1112 20:47:35.453928 2191 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:47:35.458832 kubelet[2191]: E1112 20:47:35.454372 2191 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.68:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.68:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal.1807538f031760f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,UID:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,},FirstTimestamp:2024-11-12 20:47:35.444537587 +0000 UTC m=+0.921810144,LastTimestamp:2024-11-12 20:47:35.444537587 +0000 UTC m=+0.921810144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,}"
Nov 12 20:47:35.460336 kubelet[2191]: I1112 20:47:35.460306 2191 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 20:47:35.463648 kubelet[2191]: E1112 20:47:35.463403 2191 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" not found"
Nov 12 20:47:35.463648 kubelet[2191]: I1112 20:47:35.463468 2191 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 20:47:35.464079 kubelet[2191]: I1112 20:47:35.464054 2191 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 20:47:35.464265 kubelet[2191]: I1112 20:47:35.464249 2191 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 20:47:35.466092 kubelet[2191]: W1112 20:47:35.465205 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused
Nov 12 20:47:35.466092 kubelet[2191]: E1112 20:47:35.465284 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:47:35.466092 kubelet[2191]: E1112 20:47:35.465396 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.68:6443: connect: connection refused" interval="200ms"
Nov 12 20:47:35.466092 kubelet[2191]: E1112 20:47:35.465528 2191 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 20:47:35.466092 kubelet[2191]: I1112 20:47:35.465741 2191 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:47:35.466092 kubelet[2191]: I1112 20:47:35.465862 2191 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:47:35.468353 kubelet[2191]: I1112 20:47:35.468325 2191 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:47:35.489382 kubelet[2191]: I1112 20:47:35.489292 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:47:35.490886 kubelet[2191]: I1112 20:47:35.490836 2191 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:47:35.490886 kubelet[2191]: I1112 20:47:35.490880 2191 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:47:35.491036 kubelet[2191]: I1112 20:47:35.490904 2191 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 12 20:47:35.491036 kubelet[2191]: E1112 20:47:35.490967 2191 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:47:35.502932 kubelet[2191]: W1112 20:47:35.502884 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused
Nov 12 20:47:35.503093 kubelet[2191]: E1112 20:47:35.502934 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:47:35.507449 kubelet[2191]: I1112 20:47:35.507373 2191 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:47:35.507449 kubelet[2191]: I1112 20:47:35.507394 2191 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:47:35.507449 kubelet[2191]: I1112 20:47:35.507432 2191 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:47:35.510778 kubelet[2191]: I1112 20:47:35.510733 2191 policy_none.go:49] "None policy: Start"
Nov 12 20:47:35.511730 kubelet[2191]: I1112 20:47:35.511696 2191 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:47:35.511730 kubelet[2191]: I1112 20:47:35.511732 2191 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:47:35.523437 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 12 20:47:35.539499 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 12 20:47:35.544647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 12 20:47:35.558377 kubelet[2191]: I1112 20:47:35.558334 2191 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:47:35.558893 kubelet[2191]: I1112 20:47:35.558704 2191 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 20:47:35.558893 kubelet[2191]: I1112 20:47:35.558724 2191 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 20:47:35.559517 kubelet[2191]: I1112 20:47:35.559456 2191 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:47:35.563246 kubelet[2191]: E1112 20:47:35.563078 2191 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" not found"
Nov 12 20:47:35.613952 systemd[1]: Created slice kubepods-burstable-pod01668730e6c520d4a8286fd3d69e9b64.slice - libcontainer container kubepods-burstable-pod01668730e6c520d4a8286fd3d69e9b64.slice.
Nov 12 20:47:35.626727 systemd[1]: Created slice kubepods-burstable-pod29f5fa5060c045239bab032efa3a5fa6.slice - libcontainer container kubepods-burstable-pod29f5fa5060c045239bab032efa3a5fa6.slice.
Nov 12 20:47:35.641112 systemd[1]: Created slice kubepods-burstable-pod728b672648ee1edcc992e22f8d9d1413.slice - libcontainer container kubepods-burstable-pod728b672648ee1edcc992e22f8d9d1413.slice.
Nov 12 20:47:35.664995 kubelet[2191]: I1112 20:47:35.664881 2191 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.665593 kubelet[2191]: I1112 20:47:35.665021 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/728b672648ee1edcc992e22f8d9d1413-kubeconfig\") pod \"kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"728b672648ee1edcc992e22f8d9d1413\") " pod="kube-system/kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.665593 kubelet[2191]: I1112 20:47:35.665057 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01668730e6c520d4a8286fd3d69e9b64-k8s-certs\") pod \"kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"01668730e6c520d4a8286fd3d69e9b64\") " pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.665593 kubelet[2191]: I1112 20:47:35.665092 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01668730e6c520d4a8286fd3d69e9b64-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"01668730e6c520d4a8286fd3d69e9b64\") " pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.665593 kubelet[2191]: I1112 20:47:35.665122 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-ca-certs\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.666044 kubelet[2191]: I1112 20:47:35.665178 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.666044 kubelet[2191]: I1112 20:47:35.665210 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.666044 kubelet[2191]: I1112 20:47:35.665240 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01668730e6c520d4a8286fd3d69e9b64-ca-certs\") pod \"kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"01668730e6c520d4a8286fd3d69e9b64\") " pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.666044 kubelet[2191]: I1112 20:47:35.665267 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.666243 kubelet[2191]: I1112 20:47:35.665296 2191 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.666243 kubelet[2191]: E1112 20:47:35.665912 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.68:6443: connect: connection refused" interval="400ms"
Nov 12 20:47:35.666243 kubelet[2191]: E1112 20:47:35.666024 2191 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.68:6443/api/v1/nodes\": dial tcp 10.128.0.68:6443: connect: connection refused" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.874280 kubelet[2191]: I1112 20:47:35.874135 2191 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.874687 kubelet[2191]: E1112 20:47:35.874631 2191 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.68:6443/api/v1/nodes\": dial tcp 10.128.0.68:6443: connect: connection refused" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:35.924276 containerd[1464]: time="2024-11-12T20:47:35.924217318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,Uid:01668730e6c520d4a8286fd3d69e9b64,Namespace:kube-system,Attempt:0,}"
Nov 12 20:47:35.938320 containerd[1464]: time="2024-11-12T20:47:35.938251387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,Uid:29f5fa5060c045239bab032efa3a5fa6,Namespace:kube-system,Attempt:0,}"
Nov 12 20:47:35.945683 containerd[1464]: time="2024-11-12T20:47:35.945254241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,Uid:728b672648ee1edcc992e22f8d9d1413,Namespace:kube-system,Attempt:0,}"
Nov 12 20:47:36.067134 kubelet[2191]: E1112 20:47:36.067071 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.68:6443: connect: connection refused" interval="800ms"
Nov 12 20:47:36.280877 kubelet[2191]: I1112 20:47:36.280711 2191 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:36.281181 kubelet[2191]: E1112 20:47:36.281106 2191 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.68:6443/api/v1/nodes\": dial tcp 10.128.0.68:6443: connect: connection refused" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:36.397964 kubelet[2191]: W1112 20:47:36.397873 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused
Nov 12 20:47:36.397964 kubelet[2191]: E1112 20:47:36.397967 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.68:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:47:36.437154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557725839.mount: Deactivated successfully.
Nov 12 20:47:36.447808 containerd[1464]: time="2024-11-12T20:47:36.447154619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:47:36.448789 containerd[1464]: time="2024-11-12T20:47:36.448715707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Nov 12 20:47:36.450169 containerd[1464]: time="2024-11-12T20:47:36.450126861Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:47:36.451576 containerd[1464]: time="2024-11-12T20:47:36.451500477Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:47:36.453039 containerd[1464]: time="2024-11-12T20:47:36.452979034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:47:36.455580 containerd[1464]: time="2024-11-12T20:47:36.454637344Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:47:36.457944 containerd[1464]: time="2024-11-12T20:47:36.457882401Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:47:36.459229 containerd[1464]: time="2024-11-12T20:47:36.459151153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:47:36.460751 containerd[1464]: time="2024-11-12T20:47:36.460429564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.062007ms"
Nov 12 20:47:36.466883 containerd[1464]: time="2024-11-12T20:47:36.466832059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.467532ms"
Nov 12 20:47:36.467909 containerd[1464]: time="2024-11-12T20:47:36.467858208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.532814ms"
Nov 12 20:47:36.510461 kubelet[2191]: W1112 20:47:36.503973 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused
Nov 12 20:47:36.510461 kubelet[2191]: E1112 20:47:36.504075 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.68:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:47:36.581358 kubelet[2191]: W1112 20:47:36.581162 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused
Nov 12 20:47:36.582624 kubelet[2191]: E1112 20:47:36.582582 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.68:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:47:36.666719 containerd[1464]: time="2024-11-12T20:47:36.666366833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:47:36.666719 containerd[1464]: time="2024-11-12T20:47:36.666453025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:47:36.666719 containerd[1464]: time="2024-11-12T20:47:36.666471989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:36.666719 containerd[1464]: time="2024-11-12T20:47:36.666615153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:36.671067 containerd[1464]: time="2024-11-12T20:47:36.669993706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:47:36.671067 containerd[1464]: time="2024-11-12T20:47:36.670082326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:47:36.671067 containerd[1464]: time="2024-11-12T20:47:36.670117861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:36.671067 containerd[1464]: time="2024-11-12T20:47:36.670297793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:36.674439 containerd[1464]: time="2024-11-12T20:47:36.673802981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:47:36.674439 containerd[1464]: time="2024-11-12T20:47:36.673894200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:47:36.674439 containerd[1464]: time="2024-11-12T20:47:36.673920486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:36.674439 containerd[1464]: time="2024-11-12T20:47:36.674057867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:36.709352 systemd[1]: Started cri-containerd-f085ba26579bb2edf283a49285bb581b2b621dcc7c4b42ab78d18cef18a135f3.scope - libcontainer container f085ba26579bb2edf283a49285bb581b2b621dcc7c4b42ab78d18cef18a135f3.
Nov 12 20:47:36.725074 systemd[1]: Started cri-containerd-c42d26d65daf5d42a413d231e0e762c0a9a508a5914b20d3bca771afe7bf9897.scope - libcontainer container c42d26d65daf5d42a413d231e0e762c0a9a508a5914b20d3bca771afe7bf9897.
Nov 12 20:47:36.732215 systemd[1]: Started cri-containerd-ca80ca34cf3e1a481bd2413beae992db4958b18a8e633d70094c207b2f323730.scope - libcontainer container ca80ca34cf3e1a481bd2413beae992db4958b18a8e633d70094c207b2f323730.
Nov 12 20:47:36.836735 containerd[1464]: time="2024-11-12T20:47:36.836357204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,Uid:728b672648ee1edcc992e22f8d9d1413,Namespace:kube-system,Attempt:0,} returns sandbox id \"c42d26d65daf5d42a413d231e0e762c0a9a508a5914b20d3bca771afe7bf9897\""
Nov 12 20:47:36.840927 kubelet[2191]: E1112 20:47:36.840231 2191 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-21291"
Nov 12 20:47:36.849604 containerd[1464]: time="2024-11-12T20:47:36.849518381Z" level=info msg="CreateContainer within sandbox \"c42d26d65daf5d42a413d231e0e762c0a9a508a5914b20d3bca771afe7bf9897\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 20:47:36.853971 containerd[1464]: time="2024-11-12T20:47:36.853892956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,Uid:29f5fa5060c045239bab032efa3a5fa6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f085ba26579bb2edf283a49285bb581b2b621dcc7c4b42ab78d18cef18a135f3\""
Nov 12 20:47:36.854780 containerd[1464]: time="2024-11-12T20:47:36.854738449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,Uid:01668730e6c520d4a8286fd3d69e9b64,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca80ca34cf3e1a481bd2413beae992db4958b18a8e633d70094c207b2f323730\""
Nov 12 20:47:36.856631 kubelet[2191]: E1112 20:47:36.856508 2191 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flat"
Nov 12 20:47:36.857143 kubelet[2191]: E1112 20:47:36.856982 2191 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-21291"
Nov 12 20:47:36.858274 containerd[1464]: time="2024-11-12T20:47:36.858238133Z" level=info msg="CreateContainer within sandbox \"ca80ca34cf3e1a481bd2413beae992db4958b18a8e633d70094c207b2f323730\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 20:47:36.858857 containerd[1464]: time="2024-11-12T20:47:36.858717731Z" level=info msg="CreateContainer within sandbox \"f085ba26579bb2edf283a49285bb581b2b621dcc7c4b42ab78d18cef18a135f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 20:47:36.863865 kubelet[2191]: W1112 20:47:36.863775 2191 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.68:6443: connect: connection refused
Nov 12 20:47:36.863865 kubelet[2191]: E1112 20:47:36.863849 2191 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.68:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.68:6443: connect: connection refused" logger="UnhandledError"
Nov 12 20:47:36.868942 kubelet[2191]: E1112 20:47:36.868415 2191 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.68:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.68:6443: connect: connection refused" interval="1.6s"
Nov 12 20:47:36.881383 containerd[1464]: time="2024-11-12T20:47:36.881321690Z" level=info msg="CreateContainer within sandbox \"c42d26d65daf5d42a413d231e0e762c0a9a508a5914b20d3bca771afe7bf9897\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc1b44add4933e7584b460b0af8f93083d1823a8de7bce23abc4263a55faa116\""
Nov 12 20:47:36.882359 containerd[1464]: time="2024-11-12T20:47:36.882324085Z" level=info msg="StartContainer for \"dc1b44add4933e7584b460b0af8f93083d1823a8de7bce23abc4263a55faa116\""
Nov 12 20:47:36.888421 containerd[1464]: time="2024-11-12T20:47:36.887976137Z" level=info msg="CreateContainer within sandbox \"f085ba26579bb2edf283a49285bb581b2b621dcc7c4b42ab78d18cef18a135f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b0c6139ba76263fb0336c99415241c3a5832c6794c9bc6179c016f67ebdfb0a3\""
Nov 12 20:47:36.888907 containerd[1464]: time="2024-11-12T20:47:36.888878530Z" level=info msg="StartContainer for \"b0c6139ba76263fb0336c99415241c3a5832c6794c9bc6179c016f67ebdfb0a3\""
Nov 12 20:47:36.889848 containerd[1464]: time="2024-11-12T20:47:36.889813232Z" level=info msg="CreateContainer within sandbox \"ca80ca34cf3e1a481bd2413beae992db4958b18a8e633d70094c207b2f323730\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f7e1c31f04b793c809107789c3812a68ab3bc68b03beb72c71f3180c38ad85d9\""
Nov 12 20:47:36.894999 containerd[1464]: time="2024-11-12T20:47:36.894961748Z" level=info msg="StartContainer for \"f7e1c31f04b793c809107789c3812a68ab3bc68b03beb72c71f3180c38ad85d9\""
Nov 12 20:47:36.960065 systemd[1]: Started cri-containerd-b0c6139ba76263fb0336c99415241c3a5832c6794c9bc6179c016f67ebdfb0a3.scope - libcontainer container b0c6139ba76263fb0336c99415241c3a5832c6794c9bc6179c016f67ebdfb0a3.
Nov 12 20:47:36.962715 systemd[1]: Started cri-containerd-dc1b44add4933e7584b460b0af8f93083d1823a8de7bce23abc4263a55faa116.scope - libcontainer container dc1b44add4933e7584b460b0af8f93083d1823a8de7bce23abc4263a55faa116.
Nov 12 20:47:36.972801 systemd[1]: Started cri-containerd-f7e1c31f04b793c809107789c3812a68ab3bc68b03beb72c71f3180c38ad85d9.scope - libcontainer container f7e1c31f04b793c809107789c3812a68ab3bc68b03beb72c71f3180c38ad85d9.
Nov 12 20:47:37.064955 containerd[1464]: time="2024-11-12T20:47:37.064862358Z" level=info msg="StartContainer for \"f7e1c31f04b793c809107789c3812a68ab3bc68b03beb72c71f3180c38ad85d9\" returns successfully"
Nov 12 20:47:37.090696 kubelet[2191]: I1112 20:47:37.089703 2191 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:37.090696 kubelet[2191]: E1112 20:47:37.090099 2191 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.68:6443/api/v1/nodes\": dial tcp 10.128.0.68:6443: connect: connection refused" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:37.100476 containerd[1464]: time="2024-11-12T20:47:37.099984023Z" level=info msg="StartContainer for \"dc1b44add4933e7584b460b0af8f93083d1823a8de7bce23abc4263a55faa116\" returns successfully"
Nov 12 20:47:37.108102 containerd[1464]: time="2024-11-12T20:47:37.107963853Z" level=info msg="StartContainer for \"b0c6139ba76263fb0336c99415241c3a5832c6794c9bc6179c016f67ebdfb0a3\" returns successfully"
Nov 12 20:47:38.697424 kubelet[2191]: I1112 20:47:38.697371 2191 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:40.965600 kubelet[2191]: E1112 20:47:40.965407 2191 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" not found" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:41.038345 kubelet[2191]: E1112 20:47:41.037846 2191 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal.1807538f031760f3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,UID:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,},FirstTimestamp:2024-11-12 20:47:35.444537587 +0000 UTC m=+0.921810144,LastTimestamp:2024-11-12 20:47:35.444537587 +0000 UTC m=+0.921810144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,}"
Nov 12 20:47:41.062457 kubelet[2191]: I1112 20:47:41.062244 2191 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:41.062457 kubelet[2191]: E1112 20:47:41.062401 2191 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\": node \"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" not found"
Nov 12 20:47:41.109173 kubelet[2191]: E1112 20:47:41.109024 2191 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal.1807538f045779cf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,UID:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,},FirstTimestamp:2024-11-12 20:47:35.465515471 +0000 UTC m=+0.942788052,LastTimestamp:2024-11-12 20:47:35.465515471 +0000 UTC m=+0.942788052,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,}"
Nov 12 20:47:41.170535 kubelet[2191]: E1112 20:47:41.170389 2191 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal.1807538f06ca6069 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,UID:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,},FirstTimestamp:2024-11-12 20:47:35.506600041 +0000 UTC m=+0.983872623,LastTimestamp:2024-11-12 20:47:35.506600041 +0000 UTC m=+0.983872623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal,}"
Nov 12 20:47:41.436532 kubelet[2191]: I1112 20:47:41.435003 2191 apiserver.go:52] "Watching apiserver"
Nov 12 20:47:41.465216 kubelet[2191]: I1112 20:47:41.465170 2191 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 12 20:47:42.999837 systemd[1]: Reloading requested from client PID 2459 ('systemctl') (unit session-7.scope)...
Nov 12 20:47:42.999859 systemd[1]: Reloading...
Nov 12 20:47:43.170704 zram_generator::config[2503]: No configuration found.
Nov 12 20:47:43.305180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:47:43.428132 systemd[1]: Reloading finished in 427 ms.
Nov 12 20:47:43.478219 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:47:43.505358 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:47:43.505686 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:47:43.505765 systemd[1]: kubelet.service: Consumed 1.435s CPU time, 118.9M memory peak, 0B memory swap peak.
Nov 12 20:47:43.514025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:47:43.757374 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:47:43.774296 (kubelet)[2547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:47:43.835584 kubelet[2547]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:47:43.835584 kubelet[2547]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:47:43.835584 kubelet[2547]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:47:43.836208 kubelet[2547]: I1112 20:47:43.835697 2547 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:47:43.853329 kubelet[2547]: I1112 20:47:43.852246 2547 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Nov 12 20:47:43.853329 kubelet[2547]: I1112 20:47:43.852284 2547 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 20:47:43.853329 kubelet[2547]: I1112 20:47:43.852701 2547 server.go:929] "Client rotation is on, will bootstrap in background"
Nov 12 20:47:43.854962 kubelet[2547]: I1112 20:47:43.854929 2547 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 20:47:43.858638 kubelet[2547]: I1112 20:47:43.858421 2547 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:47:43.864403 kubelet[2547]: E1112 20:47:43.864334 2547 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 12 20:47:43.864403 kubelet[2547]: I1112 20:47:43.864390 2547 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 12 20:47:43.868135 kubelet[2547]: I1112 20:47:43.868092 2547 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 20:47:43.868279 kubelet[2547]: I1112 20:47:43.868260 2547 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 12 20:47:43.868516 kubelet[2547]: I1112 20:47:43.868451 2547 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 20:47:43.868765 kubelet[2547]: I1112 20:47:43.868496 2547 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 12 20:47:43.868932 kubelet[2547]: I1112 20:47:43.868768 2547 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 20:47:43.868932 kubelet[2547]: I1112 20:47:43.868788 2547 container_manager_linux.go:300] "Creating device plugin manager"
Nov 12 20:47:43.868932 kubelet[2547]: I1112 20:47:43.868838 2547 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:47:43.869091 kubelet[2547]: I1112 20:47:43.868988 2547 kubelet.go:408] "Attempting to sync node with API server"
Nov 12 20:47:43.869091 kubelet[2547]: I1112 20:47:43.869006 2547 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 20:47:43.869091 kubelet[2547]: I1112 20:47:43.869047 2547 kubelet.go:314] "Adding apiserver pod source"
Nov 12 20:47:43.869091 kubelet[2547]: I1112 20:47:43.869068 2547 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 20:47:43.871371 kubelet[2547]: I1112 20:47:43.871348 2547 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 20:47:43.872204 kubelet[2547]: I1112 20:47:43.872181 2547 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 20:47:43.874408 kubelet[2547]: I1112 20:47:43.874386 2547 server.go:1269] "Started kubelet"
Nov 12 20:47:43.885816 kubelet[2547]: I1112 20:47:43.885785 2547 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:47:43.887418 kubelet[2547]: I1112 20:47:43.887374 2547 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:47:43.887766 kubelet[2547]: I1112 20:47:43.887698 2547 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:47:43.888489 kubelet[2547]: I1112 20:47:43.888466 2547 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 12 20:47:43.895313 kubelet[2547]: I1112 20:47:43.895293 2547 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 12 20:47:43.897685 kubelet[2547]: E1112 20:47:43.897657 2547 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" not found"
Nov 12 20:47:43.904534 kubelet[2547]: I1112 20:47:43.902856 2547 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:47:43.904534 kubelet[2547]: I1112 20:47:43.902977 2547 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:47:43.907595 kubelet[2547]: I1112 20:47:43.906288 2547 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 20:47:43.914540 kubelet[2547]: I1112 20:47:43.909796 2547 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 20:47:43.915267 kubelet[2547]: I1112 20:47:43.907828 2547 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:47:43.915267 kubelet[2547]: I1112 20:47:43.907751 2547 server.go:460] "Adding debug handlers to kubelet server"
Nov 12 20:47:43.931568 kubelet[2547]: I1112 20:47:43.931509 2547 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:47:43.937215 kubelet[2547]: I1112 20:47:43.937013 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:47:43.940790 kubelet[2547]: I1112 20:47:43.939662 2547 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:47:43.940790 kubelet[2547]: I1112 20:47:43.939707 2547 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:47:43.940790 kubelet[2547]: I1112 20:47:43.939737 2547 kubelet.go:2321] "Starting kubelet main sync loop"
Nov 12 20:47:43.940790 kubelet[2547]: E1112 20:47:43.939894 2547 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:47:44.032349 kubelet[2547]: I1112 20:47:44.032221 2547 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:47:44.032539 kubelet[2547]: I1112 20:47:44.032518 2547 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:47:44.032719 kubelet[2547]: I1112 20:47:44.032699 2547 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:47:44.033203 kubelet[2547]: I1112 20:47:44.033175 2547 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 20:47:44.033384 kubelet[2547]: I1112 20:47:44.033317 2547 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 20:47:44.033490 kubelet[2547]: I1112 20:47:44.033478 2547 policy_none.go:49] "None policy: Start"
Nov 12 20:47:44.034863 kubelet[2547]: I1112 20:47:44.034843 2547 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:47:44.035501 kubelet[2547]: I1112 20:47:44.035098 2547 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:47:44.035501 kubelet[2547]: I1112 20:47:44.035400 2547 state_mem.go:75] "Updated machine memory state"
Nov 12 20:47:44.040262 kubelet[2547]: E1112 20:47:44.040232 2547 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:47:44.043925 kubelet[2547]: I1112 20:47:44.043891 2547 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:47:44.044634 kubelet[2547]: I1112 20:47:44.044615 2547 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 12 20:47:44.047482 kubelet[2547]: I1112 20:47:44.044783 2547 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 20:47:44.047482 kubelet[2547]: I1112 20:47:44.046943 2547 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:47:44.234356 kubelet[2547]: I1112 20:47:44.233706 2547 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.247274 kubelet[2547]: I1112 20:47:44.247220 2547 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.247960 kubelet[2547]: I1112 20:47:44.247605 2547 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.250964 kubelet[2547]: W1112 20:47:44.250929 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Nov 12 20:47:44.253627 kubelet[2547]: W1112 20:47:44.253595 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Nov 12 20:47:44.258070 kubelet[2547]: W1112 20:47:44.257720 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Nov 12 20:47:44.316995 kubelet[2547]: I1112 20:47:44.316550 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.316995 kubelet[2547]: I1112 20:47:44.316634 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.316995 kubelet[2547]: I1112 20:47:44.316673 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/728b672648ee1edcc992e22f8d9d1413-kubeconfig\") pod \"kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"728b672648ee1edcc992e22f8d9d1413\") " pod="kube-system/kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.316995 kubelet[2547]: I1112 20:47:44.316704 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/01668730e6c520d4a8286fd3d69e9b64-ca-certs\") pod \"kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"01668730e6c520d4a8286fd3d69e9b64\") " pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.317404 kubelet[2547]: I1112 20:47:44.316756 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/01668730e6c520d4a8286fd3d69e9b64-k8s-certs\") pod \"kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"01668730e6c520d4a8286fd3d69e9b64\") " pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.317404 kubelet[2547]: I1112 20:47:44.316788 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/01668730e6c520d4a8286fd3d69e9b64-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"01668730e6c520d4a8286fd3d69e9b64\") " pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.317404 kubelet[2547]: I1112 20:47:44.316821 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-ca-certs\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.317404 kubelet[2547]: I1112 20:47:44.316851 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.317703 kubelet[2547]: I1112 20:47:44.316885 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29f5fa5060c045239bab032efa3a5fa6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" (UID: \"29f5fa5060c045239bab032efa3a5fa6\") " pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:44.871592 kubelet[2547]: I1112 20:47:44.870619 2547 apiserver.go:52] "Watching apiserver"
Nov 12 20:47:44.910782 kubelet[2547]: I1112 20:47:44.910220 2547 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 12 20:47:45.011440 kubelet[2547]: W1112 20:47:45.009319 2547 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Nov 12 20:47:45.011440 kubelet[2547]: E1112 20:47:45.009410 2547 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal"
Nov 12 20:47:45.042323 kubelet[2547]: I1112 20:47:45.042237 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" podStartSLOduration=1.042187941 podStartE2EDuration="1.042187941s" podCreationTimestamp="2024-11-12 20:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:47:45.040301983 +0000 UTC m=+1.259694790" watchObservedRunningTime="2024-11-12 20:47:45.042187941 +0000 UTC m=+1.261580746"
Nov 12 20:47:45.076585 kubelet[2547]: I1112 20:47:45.076498 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" podStartSLOduration=1.07646922 podStartE2EDuration="1.07646922s" podCreationTimestamp="2024-11-12 20:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:47:45.060250823 +0000 UTC m=+1.279643630" watchObservedRunningTime="2024-11-12 20:47:45.07646922 +0000 UTC m=+1.295862015"
Nov 12 20:47:45.096470 kubelet[2547]: I1112 20:47:45.096351 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" podStartSLOduration=1.096326687 podStartE2EDuration="1.096326687s" podCreationTimestamp="2024-11-12 20:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:47:45.077786062 +0000 UTC m=+1.297178870" watchObservedRunningTime="2024-11-12 20:47:45.096326687 +0000 UTC m=+1.315719495"
Nov 12 20:47:47.896469 update_engine[1457]: I20241112 20:47:47.896088 1457 update_attempter.cc:509] Updating boot flags...
Nov 12 20:47:47.970594 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2615)
Nov 12 20:47:48.113696 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2615)
Nov 12 20:47:48.234651 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2615)
Nov 12 20:47:49.871301 sudo[1711]: pam_unix(sudo:session): session closed for user root
Nov 12 20:47:49.915787 sshd[1708]: pam_unix(sshd:session): session closed for user core
Nov 12 20:47:49.928177 systemd[1]: sshd@6-10.128.0.68:22-139.178.89.65:47560.service: Deactivated successfully.
Nov 12 20:47:49.933984 systemd[1]: session-7.scope: Deactivated successfully.
Nov 12 20:47:49.934235 systemd[1]: session-7.scope: Consumed 5.627s CPU time, 154.5M memory peak, 0B memory swap peak.
Nov 12 20:47:49.937251 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit.
Nov 12 20:47:49.945221 systemd-logind[1454]: Removed session 7.
Nov 12 20:47:50.123273 kubelet[2547]: I1112 20:47:50.123133 2547 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 20:47:50.124232 containerd[1464]: time="2024-11-12T20:47:50.123909995Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 20:47:50.124691 kubelet[2547]: I1112 20:47:50.124526 2547 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 20:47:50.840424 systemd[1]: Created slice kubepods-besteffort-pod2f5c96b3_85e1_486b_9980_21f566c68e6c.slice - libcontainer container kubepods-besteffort-pod2f5c96b3_85e1_486b_9980_21f566c68e6c.slice.
Nov 12 20:47:50.870641 kubelet[2547]: I1112 20:47:50.870547 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wk8s\" (UniqueName: \"kubernetes.io/projected/2f5c96b3-85e1-486b-9980-21f566c68e6c-kube-api-access-9wk8s\") pod \"kube-proxy-5cfj6\" (UID: \"2f5c96b3-85e1-486b-9980-21f566c68e6c\") " pod="kube-system/kube-proxy-5cfj6"
Nov 12 20:47:50.870840 kubelet[2547]: I1112 20:47:50.870657 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f5c96b3-85e1-486b-9980-21f566c68e6c-kube-proxy\") pod \"kube-proxy-5cfj6\" (UID: \"2f5c96b3-85e1-486b-9980-21f566c68e6c\") " pod="kube-system/kube-proxy-5cfj6"
Nov 12 20:47:50.870840 kubelet[2547]: I1112 20:47:50.870685 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f5c96b3-85e1-486b-9980-21f566c68e6c-xtables-lock\") pod \"kube-proxy-5cfj6\" (UID: \"2f5c96b3-85e1-486b-9980-21f566c68e6c\") " pod="kube-system/kube-proxy-5cfj6"
Nov 12 20:47:50.870840 kubelet[2547]: I1112 20:47:50.870707 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f5c96b3-85e1-486b-9980-21f566c68e6c-lib-modules\") pod \"kube-proxy-5cfj6\" (UID: \"2f5c96b3-85e1-486b-9980-21f566c68e6c\") " pod="kube-system/kube-proxy-5cfj6"
Nov 12 20:47:51.155803 containerd[1464]: time="2024-11-12T20:47:51.154647392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5cfj6,Uid:2f5c96b3-85e1-486b-9980-21f566c68e6c,Namespace:kube-system,Attempt:0,}"
Nov 12 20:47:51.164169 systemd[1]: Created slice kubepods-besteffort-pod769b1018_445f_4956_aa25_afa6f8c1c005.slice - libcontainer container kubepods-besteffort-pod769b1018_445f_4956_aa25_afa6f8c1c005.slice.
Nov 12 20:47:51.173583 kubelet[2547]: I1112 20:47:51.171827 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdh5n\" (UniqueName: \"kubernetes.io/projected/769b1018-445f-4956-aa25-afa6f8c1c005-kube-api-access-kdh5n\") pod \"tigera-operator-f8bc97d4c-vslkz\" (UID: \"769b1018-445f-4956-aa25-afa6f8c1c005\") " pod="tigera-operator/tigera-operator-f8bc97d4c-vslkz"
Nov 12 20:47:51.173583 kubelet[2547]: I1112 20:47:51.171892 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/769b1018-445f-4956-aa25-afa6f8c1c005-var-lib-calico\") pod \"tigera-operator-f8bc97d4c-vslkz\" (UID: \"769b1018-445f-4956-aa25-afa6f8c1c005\") " pod="tigera-operator/tigera-operator-f8bc97d4c-vslkz"
Nov 12 20:47:51.202670 containerd[1464]: time="2024-11-12T20:47:51.201771546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:47:51.202670 containerd[1464]: time="2024-11-12T20:47:51.202011114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:47:51.202670 containerd[1464]: time="2024-11-12T20:47:51.202044368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:51.202670 containerd[1464]: time="2024-11-12T20:47:51.202182683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:51.237869 systemd[1]: Started cri-containerd-4ed0df5a3f7d144fbf8fc7325e4fcee1fb79e6d89b4eac32edb84b0d3bd7f6e1.scope - libcontainer container 4ed0df5a3f7d144fbf8fc7325e4fcee1fb79e6d89b4eac32edb84b0d3bd7f6e1.
Nov 12 20:47:51.274593 containerd[1464]: time="2024-11-12T20:47:51.274397556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5cfj6,Uid:2f5c96b3-85e1-486b-9980-21f566c68e6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ed0df5a3f7d144fbf8fc7325e4fcee1fb79e6d89b4eac32edb84b0d3bd7f6e1\""
Nov 12 20:47:51.282696 containerd[1464]: time="2024-11-12T20:47:51.282619881Z" level=info msg="CreateContainer within sandbox \"4ed0df5a3f7d144fbf8fc7325e4fcee1fb79e6d89b4eac32edb84b0d3bd7f6e1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 20:47:51.319330 containerd[1464]: time="2024-11-12T20:47:51.319204488Z" level=info msg="CreateContainer within sandbox \"4ed0df5a3f7d144fbf8fc7325e4fcee1fb79e6d89b4eac32edb84b0d3bd7f6e1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f648041b9b405238afebaaf07786dc6a9a9d9a9538b9c05092183d017fff3ef3\""
Nov 12 20:47:51.320161 containerd[1464]: time="2024-11-12T20:47:51.320118310Z" level=info msg="StartContainer for \"f648041b9b405238afebaaf07786dc6a9a9d9a9538b9c05092183d017fff3ef3\""
Nov 12 20:47:51.362906 systemd[1]: Started cri-containerd-f648041b9b405238afebaaf07786dc6a9a9d9a9538b9c05092183d017fff3ef3.scope - libcontainer container f648041b9b405238afebaaf07786dc6a9a9d9a9538b9c05092183d017fff3ef3.
Nov 12 20:47:51.403157 containerd[1464]: time="2024-11-12T20:47:51.402981501Z" level=info msg="StartContainer for \"f648041b9b405238afebaaf07786dc6a9a9d9a9538b9c05092183d017fff3ef3\" returns successfully"
Nov 12 20:47:51.471684 containerd[1464]: time="2024-11-12T20:47:51.471032288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-vslkz,Uid:769b1018-445f-4956-aa25-afa6f8c1c005,Namespace:tigera-operator,Attempt:0,}"
Nov 12 20:47:51.512492 containerd[1464]: time="2024-11-12T20:47:51.512362153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:47:51.512492 containerd[1464]: time="2024-11-12T20:47:51.512425714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:47:51.512492 containerd[1464]: time="2024-11-12T20:47:51.512444010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:51.512893 containerd[1464]: time="2024-11-12T20:47:51.512770378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:47:51.545087 systemd[1]: Started cri-containerd-5de5b3ae27436f882ed148ab5700b55750e6b685982fb913ce5e05620b74e9a9.scope - libcontainer container 5de5b3ae27436f882ed148ab5700b55750e6b685982fb913ce5e05620b74e9a9.
Nov 12 20:47:51.630393 containerd[1464]: time="2024-11-12T20:47:51.630323050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-f8bc97d4c-vslkz,Uid:769b1018-445f-4956-aa25-afa6f8c1c005,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5de5b3ae27436f882ed148ab5700b55750e6b685982fb913ce5e05620b74e9a9\""
Nov 12 20:47:51.634057 containerd[1464]: time="2024-11-12T20:47:51.634013994Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 20:47:53.782072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732795061.mount: Deactivated successfully.
Nov 12 20:47:53.957483 kubelet[2547]: I1112 20:47:53.957084 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5cfj6" podStartSLOduration=3.957060053 podStartE2EDuration="3.957060053s" podCreationTimestamp="2024-11-12 20:47:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:47:52.037251425 +0000 UTC m=+8.256644234" watchObservedRunningTime="2024-11-12 20:47:53.957060053 +0000 UTC m=+10.176452863"
Nov 12 20:47:54.992856 containerd[1464]: time="2024-11-12T20:47:54.992785850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:47:54.994277 containerd[1464]: time="2024-11-12T20:47:54.994204448Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763371"
Nov 12 20:47:54.996423 containerd[1464]: time="2024-11-12T20:47:54.996352419Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:47:55.000927 containerd[1464]: time="2024-11-12T20:47:55.000779956Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:47:55.003836 containerd[1464]: time="2024-11-12T20:47:55.003647695Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 3.369583145s"
Nov 12 20:47:55.003836 containerd[1464]: time="2024-11-12T20:47:55.003703190Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\""
Nov 12 20:47:55.007952 containerd[1464]: time="2024-11-12T20:47:55.007402215Z" level=info msg="CreateContainer within sandbox \"5de5b3ae27436f882ed148ab5700b55750e6b685982fb913ce5e05620b74e9a9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 20:47:55.026968 containerd[1464]: time="2024-11-12T20:47:55.026890403Z" level=info msg="CreateContainer within sandbox \"5de5b3ae27436f882ed148ab5700b55750e6b685982fb913ce5e05620b74e9a9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"458b357ba820e4c5f4d2b0442db057773d3d07f102596fc9a514d67d02621493\""
Nov 12 20:47:55.029754 containerd[1464]: time="2024-11-12T20:47:55.028492873Z" level=info msg="StartContainer for \"458b357ba820e4c5f4d2b0442db057773d3d07f102596fc9a514d67d02621493\""
Nov 12 20:47:55.084699 systemd[1]: run-containerd-runc-k8s.io-458b357ba820e4c5f4d2b0442db057773d3d07f102596fc9a514d67d02621493-runc.zjjDvu.mount: Deactivated successfully.
Nov 12 20:47:55.096917 systemd[1]: Started cri-containerd-458b357ba820e4c5f4d2b0442db057773d3d07f102596fc9a514d67d02621493.scope - libcontainer container 458b357ba820e4c5f4d2b0442db057773d3d07f102596fc9a514d67d02621493. Nov 12 20:47:55.140929 containerd[1464]: time="2024-11-12T20:47:55.140807951Z" level=info msg="StartContainer for \"458b357ba820e4c5f4d2b0442db057773d3d07f102596fc9a514d67d02621493\" returns successfully" Nov 12 20:47:58.493512 kubelet[2547]: I1112 20:47:58.493413 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-f8bc97d4c-vslkz" podStartSLOduration=4.120879 podStartE2EDuration="7.493383487s" podCreationTimestamp="2024-11-12 20:47:51 +0000 UTC" firstStartedPulling="2024-11-12 20:47:51.632106171 +0000 UTC m=+7.851498957" lastFinishedPulling="2024-11-12 20:47:55.004610646 +0000 UTC m=+11.224003444" observedRunningTime="2024-11-12 20:47:56.056456669 +0000 UTC m=+12.275849476" watchObservedRunningTime="2024-11-12 20:47:58.493383487 +0000 UTC m=+14.712776300" Nov 12 20:47:58.508004 systemd[1]: Created slice kubepods-besteffort-pod61cecc11_ff12_43f4_94f1_08229736bc17.slice - libcontainer container kubepods-besteffort-pod61cecc11_ff12_43f4_94f1_08229736bc17.slice. 
Nov 12 20:47:58.520878 kubelet[2547]: I1112 20:47:58.520655 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/61cecc11-ff12-43f4-94f1-08229736bc17-typha-certs\") pod \"calico-typha-6bd8fb57d5-9gblj\" (UID: \"61cecc11-ff12-43f4-94f1-08229736bc17\") " pod="calico-system/calico-typha-6bd8fb57d5-9gblj" Nov 12 20:47:58.520878 kubelet[2547]: I1112 20:47:58.520788 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fdjj\" (UniqueName: \"kubernetes.io/projected/61cecc11-ff12-43f4-94f1-08229736bc17-kube-api-access-6fdjj\") pod \"calico-typha-6bd8fb57d5-9gblj\" (UID: \"61cecc11-ff12-43f4-94f1-08229736bc17\") " pod="calico-system/calico-typha-6bd8fb57d5-9gblj" Nov 12 20:47:58.520878 kubelet[2547]: I1112 20:47:58.520828 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61cecc11-ff12-43f4-94f1-08229736bc17-tigera-ca-bundle\") pod \"calico-typha-6bd8fb57d5-9gblj\" (UID: \"61cecc11-ff12-43f4-94f1-08229736bc17\") " pod="calico-system/calico-typha-6bd8fb57d5-9gblj" Nov 12 20:47:58.706453 systemd[1]: Created slice kubepods-besteffort-pode3d43da4_d9f5_4cf6_9a21_7a9e4f2ca61b.slice - libcontainer container kubepods-besteffort-pode3d43da4_d9f5_4cf6_9a21_7a9e4f2ca61b.slice. 
Nov 12 20:47:58.723589 kubelet[2547]: I1112 20:47:58.722636 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-xtables-lock\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723589 kubelet[2547]: I1112 20:47:58.722694 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-var-lib-calico\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723589 kubelet[2547]: I1112 20:47:58.722723 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-cni-log-dir\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723589 kubelet[2547]: I1112 20:47:58.722752 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-tigera-ca-bundle\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723589 kubelet[2547]: I1112 20:47:58.722779 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-lib-modules\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723986 kubelet[2547]: I1112 20:47:58.722806 2547 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-policysync\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723986 kubelet[2547]: I1112 20:47:58.722836 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-flexvol-driver-host\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723986 kubelet[2547]: I1112 20:47:58.722867 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9fpj\" (UniqueName: \"kubernetes.io/projected/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-kube-api-access-b9fpj\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723986 kubelet[2547]: I1112 20:47:58.722894 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-var-run-calico\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.723986 kubelet[2547]: I1112 20:47:58.722961 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-node-certs\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.724244 kubelet[2547]: I1112 20:47:58.722991 2547 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-cni-bin-dir\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.724244 kubelet[2547]: I1112 20:47:58.723026 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b-cni-net-dir\") pod \"calico-node-5fr48\" (UID: \"e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b\") " pod="calico-system/calico-node-5fr48" Nov 12 20:47:58.816146 containerd[1464]: time="2024-11-12T20:47:58.816087811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd8fb57d5-9gblj,Uid:61cecc11-ff12-43f4-94f1-08229736bc17,Namespace:calico-system,Attempt:0,}" Nov 12 20:47:58.831535 kubelet[2547]: E1112 20:47:58.830733 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4qsw" podUID="135ae93f-39f0-4df8-85fb-bb23f14dc7a4" Nov 12 20:47:58.866013 kubelet[2547]: E1112 20:47:58.865976 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.866272 kubelet[2547]: W1112 20:47:58.866243 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.872627 kubelet[2547]: E1112 20:47:58.872590 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.875042 kubelet[2547]: E1112 20:47:58.874910 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.888811 kubelet[2547]: W1112 20:47:58.888759 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.892401 kubelet[2547]: E1112 20:47:58.891654 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.910439 kubelet[2547]: E1112 20:47:58.910384 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.910685 kubelet[2547]: W1112 20:47:58.910659 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.912552 kubelet[2547]: E1112 20:47:58.912508 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.915499 kubelet[2547]: E1112 20:47:58.915233 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.916042 kubelet[2547]: W1112 20:47:58.915769 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.916042 kubelet[2547]: E1112 20:47:58.915804 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.917133 kubelet[2547]: E1112 20:47:58.917063 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.917133 kubelet[2547]: W1112 20:47:58.917085 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.918025 kubelet[2547]: E1112 20:47:58.917281 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.919545 kubelet[2547]: E1112 20:47:58.919332 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.919545 kubelet[2547]: W1112 20:47:58.919468 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.919545 kubelet[2547]: E1112 20:47:58.919494 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.920761 kubelet[2547]: E1112 20:47:58.920458 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.920761 kubelet[2547]: W1112 20:47:58.920481 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.920761 kubelet[2547]: E1112 20:47:58.920501 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.922370 kubelet[2547]: E1112 20:47:58.921965 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.922370 kubelet[2547]: W1112 20:47:58.921983 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.922370 kubelet[2547]: E1112 20:47:58.922001 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.924913 kubelet[2547]: E1112 20:47:58.924840 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.924913 kubelet[2547]: W1112 20:47:58.924862 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.924913 kubelet[2547]: E1112 20:47:58.924880 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.927209 containerd[1464]: time="2024-11-12T20:47:58.924967527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:47:58.927209 containerd[1464]: time="2024-11-12T20:47:58.925056577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:47:58.927209 containerd[1464]: time="2024-11-12T20:47:58.925116618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:58.927209 containerd[1464]: time="2024-11-12T20:47:58.925256420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:58.927532 kubelet[2547]: E1112 20:47:58.926751 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.927532 kubelet[2547]: W1112 20:47:58.927145 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.927532 kubelet[2547]: E1112 20:47:58.927178 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.931623 kubelet[2547]: E1112 20:47:58.930372 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.931623 kubelet[2547]: W1112 20:47:58.930393 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.931623 kubelet[2547]: E1112 20:47:58.930411 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.936155 kubelet[2547]: E1112 20:47:58.934427 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.936155 kubelet[2547]: W1112 20:47:58.934456 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.936155 kubelet[2547]: E1112 20:47:58.934479 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.936155 kubelet[2547]: E1112 20:47:58.935210 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.936155 kubelet[2547]: W1112 20:47:58.935225 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.936155 kubelet[2547]: E1112 20:47:58.935281 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.936155 kubelet[2547]: E1112 20:47:58.935769 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.936155 kubelet[2547]: W1112 20:47:58.935785 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.936155 kubelet[2547]: E1112 20:47:58.935805 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.939439 kubelet[2547]: E1112 20:47:58.937249 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.939439 kubelet[2547]: W1112 20:47:58.937268 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.939439 kubelet[2547]: E1112 20:47:58.937288 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.939439 kubelet[2547]: E1112 20:47:58.939201 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.939439 kubelet[2547]: W1112 20:47:58.939220 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.939439 kubelet[2547]: E1112 20:47:58.939238 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.940234 kubelet[2547]: E1112 20:47:58.940214 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.940538 kubelet[2547]: W1112 20:47:58.940343 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.940538 kubelet[2547]: E1112 20:47:58.940420 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.940934 kubelet[2547]: E1112 20:47:58.940916 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.941208 kubelet[2547]: W1112 20:47:58.941029 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.941208 kubelet[2547]: E1112 20:47:58.941053 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.941930 kubelet[2547]: E1112 20:47:58.941900 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.942212 kubelet[2547]: W1112 20:47:58.942145 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.942337 kubelet[2547]: E1112 20:47:58.942318 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.945011 kubelet[2547]: E1112 20:47:58.943536 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.945011 kubelet[2547]: W1112 20:47:58.943588 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.945011 kubelet[2547]: E1112 20:47:58.943614 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.945840 kubelet[2547]: E1112 20:47:58.945691 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.945840 kubelet[2547]: W1112 20:47:58.945710 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.945840 kubelet[2547]: E1112 20:47:58.945727 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.946631 kubelet[2547]: E1112 20:47:58.946598 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.946777 kubelet[2547]: W1112 20:47:58.946759 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.946917 kubelet[2547]: E1112 20:47:58.946899 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.948736 kubelet[2547]: E1112 20:47:58.948405 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.948736 kubelet[2547]: W1112 20:47:58.948439 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.948736 kubelet[2547]: E1112 20:47:58.948458 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.951663 kubelet[2547]: E1112 20:47:58.950800 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.951663 kubelet[2547]: W1112 20:47:58.950854 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.951663 kubelet[2547]: E1112 20:47:58.950878 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.951663 kubelet[2547]: I1112 20:47:58.950979 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/135ae93f-39f0-4df8-85fb-bb23f14dc7a4-varrun\") pod \"csi-node-driver-j4qsw\" (UID: \"135ae93f-39f0-4df8-85fb-bb23f14dc7a4\") " pod="calico-system/csi-node-driver-j4qsw" Nov 12 20:47:58.951663 kubelet[2547]: E1112 20:47:58.951523 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.951663 kubelet[2547]: W1112 20:47:58.951538 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.951663 kubelet[2547]: E1112 20:47:58.951608 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.952574 kubelet[2547]: E1112 20:47:58.952514 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.952574 kubelet[2547]: W1112 20:47:58.952534 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.953362 kubelet[2547]: E1112 20:47:58.953005 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.954017 kubelet[2547]: E1112 20:47:58.953801 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.954017 kubelet[2547]: W1112 20:47:58.953824 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.954017 kubelet[2547]: E1112 20:47:58.953844 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.954017 kubelet[2547]: I1112 20:47:58.953972 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/135ae93f-39f0-4df8-85fb-bb23f14dc7a4-socket-dir\") pod \"csi-node-driver-j4qsw\" (UID: \"135ae93f-39f0-4df8-85fb-bb23f14dc7a4\") " pod="calico-system/csi-node-driver-j4qsw" Nov 12 20:47:58.955393 kubelet[2547]: E1112 20:47:58.955041 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.955393 kubelet[2547]: W1112 20:47:58.955061 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.955393 kubelet[2547]: E1112 20:47:58.955118 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.956593 kubelet[2547]: E1112 20:47:58.956437 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.956593 kubelet[2547]: W1112 20:47:58.956455 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.957148 kubelet[2547]: E1112 20:47:58.956809 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.959921 kubelet[2547]: E1112 20:47:58.959657 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.959921 kubelet[2547]: W1112 20:47:58.959679 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.959921 kubelet[2547]: E1112 20:47:58.959699 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.959921 kubelet[2547]: I1112 20:47:58.959740 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/135ae93f-39f0-4df8-85fb-bb23f14dc7a4-registration-dir\") pod \"csi-node-driver-j4qsw\" (UID: \"135ae93f-39f0-4df8-85fb-bb23f14dc7a4\") " pod="calico-system/csi-node-driver-j4qsw" Nov 12 20:47:58.960454 kubelet[2547]: E1112 20:47:58.960289 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.960454 kubelet[2547]: W1112 20:47:58.960313 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.960454 kubelet[2547]: E1112 20:47:58.960345 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.960454 kubelet[2547]: I1112 20:47:58.960375 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/135ae93f-39f0-4df8-85fb-bb23f14dc7a4-kubelet-dir\") pod \"csi-node-driver-j4qsw\" (UID: \"135ae93f-39f0-4df8-85fb-bb23f14dc7a4\") " pod="calico-system/csi-node-driver-j4qsw" Nov 12 20:47:58.961771 kubelet[2547]: E1112 20:47:58.961308 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.961771 kubelet[2547]: W1112 20:47:58.961329 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.961771 kubelet[2547]: E1112 20:47:58.961370 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.961771 kubelet[2547]: I1112 20:47:58.961400 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nkxd\" (UniqueName: \"kubernetes.io/projected/135ae93f-39f0-4df8-85fb-bb23f14dc7a4-kube-api-access-2nkxd\") pod \"csi-node-driver-j4qsw\" (UID: \"135ae93f-39f0-4df8-85fb-bb23f14dc7a4\") " pod="calico-system/csi-node-driver-j4qsw" Nov 12 20:47:58.962918 kubelet[2547]: E1112 20:47:58.962401 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.962918 kubelet[2547]: W1112 20:47:58.962530 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.962918 kubelet[2547]: E1112 20:47:58.962551 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.964144 kubelet[2547]: E1112 20:47:58.963840 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.964144 kubelet[2547]: W1112 20:47:58.963887 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.964144 kubelet[2547]: E1112 20:47:58.963955 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.965531 kubelet[2547]: E1112 20:47:58.965250 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.965531 kubelet[2547]: W1112 20:47:58.965269 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.965531 kubelet[2547]: E1112 20:47:58.965497 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.966620 kubelet[2547]: E1112 20:47:58.966176 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.966620 kubelet[2547]: W1112 20:47:58.966197 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.966620 kubelet[2547]: E1112 20:47:58.966220 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:58.967352 kubelet[2547]: E1112 20:47:58.966966 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.967352 kubelet[2547]: W1112 20:47:58.966984 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.967352 kubelet[2547]: E1112 20:47:58.967028 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.968734 kubelet[2547]: E1112 20:47:58.968345 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:58.968734 kubelet[2547]: W1112 20:47:58.968377 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:58.968734 kubelet[2547]: E1112 20:47:58.968395 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:58.994879 systemd[1]: Started cri-containerd-0fb36c9839d02fa1969d14b0a68a7956b2418f620b55a2a9adf662658f530a14.scope - libcontainer container 0fb36c9839d02fa1969d14b0a68a7956b2418f620b55a2a9adf662658f530a14. 
Nov 12 20:47:59.014124 containerd[1464]: time="2024-11-12T20:47:59.013280811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fr48,Uid:e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b,Namespace:calico-system,Attempt:0,}" Nov 12 20:47:59.065890 kubelet[2547]: E1112 20:47:59.065790 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.068816 kubelet[2547]: W1112 20:47:59.066500 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.068816 kubelet[2547]: E1112 20:47:59.066606 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.070293 kubelet[2547]: E1112 20:47:59.069847 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.070293 kubelet[2547]: W1112 20:47:59.069878 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.070293 kubelet[2547]: E1112 20:47:59.069933 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.075193 kubelet[2547]: E1112 20:47:59.073546 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.075193 kubelet[2547]: W1112 20:47:59.073834 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.075193 kubelet[2547]: E1112 20:47:59.074276 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.078061 kubelet[2547]: E1112 20:47:59.075814 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.078061 kubelet[2547]: W1112 20:47:59.075839 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.078061 kubelet[2547]: E1112 20:47:59.075886 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.079235 kubelet[2547]: E1112 20:47:59.079070 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.081044 kubelet[2547]: W1112 20:47:59.080666 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.082399 kubelet[2547]: E1112 20:47:59.081504 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.085987 kubelet[2547]: E1112 20:47:59.085196 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.085987 kubelet[2547]: W1112 20:47:59.085237 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.085987 kubelet[2547]: E1112 20:47:59.085911 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.087437 kubelet[2547]: E1112 20:47:59.086869 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.087437 kubelet[2547]: W1112 20:47:59.086892 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.087437 kubelet[2547]: E1112 20:47:59.086998 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.088083 kubelet[2547]: E1112 20:47:59.087896 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.088083 kubelet[2547]: W1112 20:47:59.087919 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.088579 kubelet[2547]: E1112 20:47:59.088375 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.089260 kubelet[2547]: E1112 20:47:59.089006 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.089260 kubelet[2547]: W1112 20:47:59.089067 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.089663 kubelet[2547]: E1112 20:47:59.089610 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.090258 kubelet[2547]: E1112 20:47:59.090108 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.090258 kubelet[2547]: W1112 20:47:59.090127 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.091115 kubelet[2547]: E1112 20:47:59.090549 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.091358 kubelet[2547]: E1112 20:47:59.091341 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.091490 kubelet[2547]: W1112 20:47:59.091469 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.091909 kubelet[2547]: E1112 20:47:59.091746 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.092120 kubelet[2547]: E1112 20:47:59.092104 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.092296 kubelet[2547]: W1112 20:47:59.092229 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.092502 kubelet[2547]: E1112 20:47:59.092408 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.093232 kubelet[2547]: E1112 20:47:59.093101 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.093232 kubelet[2547]: W1112 20:47:59.093120 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.093631 kubelet[2547]: E1112 20:47:59.093526 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.093791 kubelet[2547]: E1112 20:47:59.093744 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.093791 kubelet[2547]: W1112 20:47:59.093761 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.094109 kubelet[2547]: E1112 20:47:59.094075 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.094509 kubelet[2547]: E1112 20:47:59.094402 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.094509 kubelet[2547]: W1112 20:47:59.094420 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.094801 kubelet[2547]: E1112 20:47:59.094681 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.095100 kubelet[2547]: E1112 20:47:59.095082 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.095364 kubelet[2547]: W1112 20:47:59.095223 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.095570 kubelet[2547]: E1112 20:47:59.095497 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.096417 kubelet[2547]: E1112 20:47:59.096242 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.096417 kubelet[2547]: W1112 20:47:59.096264 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.096792 kubelet[2547]: E1112 20:47:59.096617 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.096986 kubelet[2547]: E1112 20:47:59.096970 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.097087 kubelet[2547]: W1112 20:47:59.097070 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.097427 kubelet[2547]: E1112 20:47:59.097349 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.098067 kubelet[2547]: E1112 20:47:59.097905 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.098067 kubelet[2547]: W1112 20:47:59.097924 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.098323 kubelet[2547]: E1112 20:47:59.098246 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.098867 kubelet[2547]: E1112 20:47:59.098656 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.098867 kubelet[2547]: W1112 20:47:59.098673 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.099323 kubelet[2547]: E1112 20:47:59.099130 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.099859 kubelet[2547]: E1112 20:47:59.099683 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.099859 kubelet[2547]: W1112 20:47:59.099700 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.100158 kubelet[2547]: E1112 20:47:59.100096 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.100710 kubelet[2547]: E1112 20:47:59.100579 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.100710 kubelet[2547]: W1112 20:47:59.100597 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.101174 kubelet[2547]: E1112 20:47:59.100965 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.101445 kubelet[2547]: E1112 20:47:59.101423 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.101971 kubelet[2547]: W1112 20:47:59.101547 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.102280 kubelet[2547]: E1112 20:47:59.102235 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.103040 kubelet[2547]: E1112 20:47:59.102907 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.103040 kubelet[2547]: W1112 20:47:59.102925 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.103419 kubelet[2547]: E1112 20:47:59.103225 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:47:59.105162 kubelet[2547]: E1112 20:47:59.104600 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.105162 kubelet[2547]: W1112 20:47:59.104620 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.105162 kubelet[2547]: E1112 20:47:59.104637 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.115820 kubelet[2547]: E1112 20:47:59.114742 2547 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:47:59.116090 kubelet[2547]: W1112 20:47:59.116002 2547 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:47:59.116090 kubelet[2547]: E1112 20:47:59.116041 2547 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:47:59.120544 containerd[1464]: time="2024-11-12T20:47:59.119187388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:47:59.121010 containerd[1464]: time="2024-11-12T20:47:59.120907702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:47:59.121010 containerd[1464]: time="2024-11-12T20:47:59.120948609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:59.121862 containerd[1464]: time="2024-11-12T20:47:59.121776685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:47:59.152263 systemd[1]: Started cri-containerd-49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7.scope - libcontainer container 49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7. Nov 12 20:47:59.252993 containerd[1464]: time="2024-11-12T20:47:59.252861724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5fr48,Uid:e3d43da4-d9f5-4cf6-9a21-7a9e4f2ca61b,Namespace:calico-system,Attempt:0,} returns sandbox id \"49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7\"" Nov 12 20:47:59.259544 containerd[1464]: time="2024-11-12T20:47:59.258395607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd8fb57d5-9gblj,Uid:61cecc11-ff12-43f4-94f1-08229736bc17,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fb36c9839d02fa1969d14b0a68a7956b2418f620b55a2a9adf662658f530a14\"" Nov 12 20:47:59.261693 containerd[1464]: time="2024-11-12T20:47:59.260498896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:48:00.286906 containerd[1464]: time="2024-11-12T20:48:00.286825164Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:00.288537 containerd[1464]: time="2024-11-12T20:48:00.288282148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:48:00.291461 containerd[1464]: time="2024-11-12T20:48:00.289888377Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 
20:48:00.294680 containerd[1464]: time="2024-11-12T20:48:00.294638377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:00.296206 containerd[1464]: time="2024-11-12T20:48:00.295598341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.035049243s" Nov 12 20:48:00.296911 containerd[1464]: time="2024-11-12T20:48:00.296859415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:48:00.300733 containerd[1464]: time="2024-11-12T20:48:00.300669539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:48:00.302500 containerd[1464]: time="2024-11-12T20:48:00.302447109Z" level=info msg="CreateContainer within sandbox \"49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:48:00.328760 containerd[1464]: time="2024-11-12T20:48:00.328697157Z" level=info msg="CreateContainer within sandbox \"49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5\"" Nov 12 20:48:00.331058 containerd[1464]: time="2024-11-12T20:48:00.329494463Z" level=info msg="StartContainer for \"0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5\"" Nov 12 20:48:00.386947 systemd[1]: 
run-containerd-runc-k8s.io-0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5-runc.9waeK7.mount: Deactivated successfully. Nov 12 20:48:00.396928 systemd[1]: Started cri-containerd-0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5.scope - libcontainer container 0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5. Nov 12 20:48:00.455487 containerd[1464]: time="2024-11-12T20:48:00.455423272Z" level=info msg="StartContainer for \"0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5\" returns successfully" Nov 12 20:48:00.472999 systemd[1]: cri-containerd-0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5.scope: Deactivated successfully. Nov 12 20:48:00.635792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5-rootfs.mount: Deactivated successfully. Nov 12 20:48:00.809218 containerd[1464]: time="2024-11-12T20:48:00.809087929Z" level=info msg="shim disconnected" id=0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5 namespace=k8s.io Nov 12 20:48:00.809218 containerd[1464]: time="2024-11-12T20:48:00.809177502Z" level=warning msg="cleaning up after shim disconnected" id=0971147bbd89eadfabc97a5fe673e75d80ee8c7ef15829f62fe7bd168bb099c5 namespace=k8s.io Nov 12 20:48:00.809218 containerd[1464]: time="2024-11-12T20:48:00.809193192Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:48:00.941668 kubelet[2547]: E1112 20:48:00.941081 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4qsw" podUID="135ae93f-39f0-4df8-85fb-bb23f14dc7a4" Nov 12 20:48:02.163006 containerd[1464]: time="2024-11-12T20:48:02.162919984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:02.165042 containerd[1464]: time="2024-11-12T20:48:02.164953939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:48:02.166675 containerd[1464]: time="2024-11-12T20:48:02.166595465Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:02.172223 containerd[1464]: time="2024-11-12T20:48:02.172142130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:02.173733 containerd[1464]: time="2024-11-12T20:48:02.172993804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 1.872248641s" Nov 12 20:48:02.173733 containerd[1464]: time="2024-11-12T20:48:02.173042734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:48:02.174692 containerd[1464]: time="2024-11-12T20:48:02.174659802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:48:02.197042 containerd[1464]: time="2024-11-12T20:48:02.196995079Z" level=info msg="CreateContainer within sandbox \"0fb36c9839d02fa1969d14b0a68a7956b2418f620b55a2a9adf662658f530a14\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:48:02.220174 containerd[1464]: time="2024-11-12T20:48:02.220109877Z" level=info msg="CreateContainer 
within sandbox \"0fb36c9839d02fa1969d14b0a68a7956b2418f620b55a2a9adf662658f530a14\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4479adfd2e6cc9faec2bce7e9a8eeb970595be5ec7536623d2920540f91ca962\"" Nov 12 20:48:02.222265 containerd[1464]: time="2024-11-12T20:48:02.220799235Z" level=info msg="StartContainer for \"4479adfd2e6cc9faec2bce7e9a8eeb970595be5ec7536623d2920540f91ca962\"" Nov 12 20:48:02.283864 systemd[1]: Started cri-containerd-4479adfd2e6cc9faec2bce7e9a8eeb970595be5ec7536623d2920540f91ca962.scope - libcontainer container 4479adfd2e6cc9faec2bce7e9a8eeb970595be5ec7536623d2920540f91ca962. Nov 12 20:48:02.348184 containerd[1464]: time="2024-11-12T20:48:02.348122054Z" level=info msg="StartContainer for \"4479adfd2e6cc9faec2bce7e9a8eeb970595be5ec7536623d2920540f91ca962\" returns successfully" Nov 12 20:48:02.941182 kubelet[2547]: E1112 20:48:02.941030 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4qsw" podUID="135ae93f-39f0-4df8-85fb-bb23f14dc7a4" Nov 12 20:48:03.085858 kubelet[2547]: I1112 20:48:03.084660 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bd8fb57d5-9gblj" podStartSLOduration=2.175982646 podStartE2EDuration="5.084635102s" podCreationTimestamp="2024-11-12 20:47:58 +0000 UTC" firstStartedPulling="2024-11-12 20:47:59.265726882 +0000 UTC m=+15.485119668" lastFinishedPulling="2024-11-12 20:48:02.174379326 +0000 UTC m=+18.393772124" observedRunningTime="2024-11-12 20:48:03.082107 +0000 UTC m=+19.301499809" watchObservedRunningTime="2024-11-12 20:48:03.084635102 +0000 UTC m=+19.304027908" Nov 12 20:48:04.064137 kubelet[2547]: I1112 20:48:04.064086 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:48:04.941133 
kubelet[2547]: E1112 20:48:04.941056 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4qsw" podUID="135ae93f-39f0-4df8-85fb-bb23f14dc7a4" Nov 12 20:48:06.477640 containerd[1464]: time="2024-11-12T20:48:06.477568799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:06.479206 containerd[1464]: time="2024-11-12T20:48:06.478923245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:48:06.480817 containerd[1464]: time="2024-11-12T20:48:06.480746132Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:06.485149 containerd[1464]: time="2024-11-12T20:48:06.485084889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:06.486820 containerd[1464]: time="2024-11-12T20:48:06.485893049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 4.310787282s" Nov 12 20:48:06.486820 containerd[1464]: time="2024-11-12T20:48:06.485940248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 
20:48:06.490692 containerd[1464]: time="2024-11-12T20:48:06.490651460Z" level=info msg="CreateContainer within sandbox \"49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:48:06.512329 containerd[1464]: time="2024-11-12T20:48:06.512268914Z" level=info msg="CreateContainer within sandbox \"49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd\"" Nov 12 20:48:06.514616 containerd[1464]: time="2024-11-12T20:48:06.513027315Z" level=info msg="StartContainer for \"e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd\"" Nov 12 20:48:06.563810 systemd[1]: Started cri-containerd-e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd.scope - libcontainer container e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd. Nov 12 20:48:06.607326 containerd[1464]: time="2024-11-12T20:48:06.607266139Z" level=info msg="StartContainer for \"e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd\" returns successfully" Nov 12 20:48:06.942236 kubelet[2547]: E1112 20:48:06.941177 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j4qsw" podUID="135ae93f-39f0-4df8-85fb-bb23f14dc7a4" Nov 12 20:48:07.680621 containerd[1464]: time="2024-11-12T20:48:07.680539834Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:48:07.688820 systemd[1]: 
cri-containerd-e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd.scope: Deactivated successfully. Nov 12 20:48:07.738959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd-rootfs.mount: Deactivated successfully. Nov 12 20:48:07.789881 kubelet[2547]: I1112 20:48:07.788358 2547 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Nov 12 20:48:07.843008 systemd[1]: Created slice kubepods-burstable-podf64531b6_17e8_4c48_9009_e59e3b3fc041.slice - libcontainer container kubepods-burstable-podf64531b6_17e8_4c48_9009_e59e3b3fc041.slice. Nov 12 20:48:07.860265 kubelet[2547]: I1112 20:48:07.860202 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bknbw\" (UniqueName: \"kubernetes.io/projected/f64531b6-17e8-4c48-9009-e59e3b3fc041-kube-api-access-bknbw\") pod \"coredns-6f6b679f8f-fzbmz\" (UID: \"f64531b6-17e8-4c48-9009-e59e3b3fc041\") " pod="kube-system/coredns-6f6b679f8f-fzbmz" Nov 12 20:48:07.860456 kubelet[2547]: I1112 20:48:07.860285 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f64531b6-17e8-4c48-9009-e59e3b3fc041-config-volume\") pod \"coredns-6f6b679f8f-fzbmz\" (UID: \"f64531b6-17e8-4c48-9009-e59e3b3fc041\") " pod="kube-system/coredns-6f6b679f8f-fzbmz" Nov 12 20:48:07.868053 systemd[1]: Created slice kubepods-burstable-poddcce075f_b91c_4537_baf7_afddd002397a.slice - libcontainer container kubepods-burstable-poddcce075f_b91c_4537_baf7_afddd002397a.slice. Nov 12 20:48:07.892876 systemd[1]: Created slice kubepods-besteffort-pod21429838_1e78_4092_ae72_36d1f86e0ea6.slice - libcontainer container kubepods-besteffort-pod21429838_1e78_4092_ae72_36d1f86e0ea6.slice. 
Nov 12 20:48:07.904800 systemd[1]: Created slice kubepods-besteffort-poda33ec532_6859_4a0f_a2fd_edeb7f28ebcb.slice - libcontainer container kubepods-besteffort-poda33ec532_6859_4a0f_a2fd_edeb7f28ebcb.slice. Nov 12 20:48:07.913748 systemd[1]: Created slice kubepods-besteffort-pod52481471_f033_4cb1_b92c_d14ac3414abb.slice - libcontainer container kubepods-besteffort-pod52481471_f033_4cb1_b92c_d14ac3414abb.slice. Nov 12 20:48:07.962586 kubelet[2547]: I1112 20:48:07.961015 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dcce075f-b91c-4537-baf7-afddd002397a-config-volume\") pod \"coredns-6f6b679f8f-jn6hw\" (UID: \"dcce075f-b91c-4537-baf7-afddd002397a\") " pod="kube-system/coredns-6f6b679f8f-jn6hw" Nov 12 20:48:07.962586 kubelet[2547]: I1112 20:48:07.961078 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/21429838-1e78-4092-ae72-36d1f86e0ea6-calico-apiserver-certs\") pod \"calico-apiserver-65945754fc-qf8qk\" (UID: \"21429838-1e78-4092-ae72-36d1f86e0ea6\") " pod="calico-apiserver/calico-apiserver-65945754fc-qf8qk" Nov 12 20:48:07.962586 kubelet[2547]: I1112 20:48:07.961107 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmgv\" (UniqueName: \"kubernetes.io/projected/a33ec532-6859-4a0f-a2fd-edeb7f28ebcb-kube-api-access-8qmgv\") pod \"calico-kube-controllers-656f8cbb56-jj5lc\" (UID: \"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb\") " pod="calico-system/calico-kube-controllers-656f8cbb56-jj5lc" Nov 12 20:48:07.962586 kubelet[2547]: I1112 20:48:07.961162 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a33ec532-6859-4a0f-a2fd-edeb7f28ebcb-tigera-ca-bundle\") pod 
\"calico-kube-controllers-656f8cbb56-jj5lc\" (UID: \"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb\") " pod="calico-system/calico-kube-controllers-656f8cbb56-jj5lc" Nov 12 20:48:07.962586 kubelet[2547]: I1112 20:48:07.961193 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjkz\" (UniqueName: \"kubernetes.io/projected/21429838-1e78-4092-ae72-36d1f86e0ea6-kube-api-access-gbjkz\") pod \"calico-apiserver-65945754fc-qf8qk\" (UID: \"21429838-1e78-4092-ae72-36d1f86e0ea6\") " pod="calico-apiserver/calico-apiserver-65945754fc-qf8qk" Nov 12 20:48:07.963398 kubelet[2547]: I1112 20:48:07.961223 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5nd\" (UniqueName: \"kubernetes.io/projected/52481471-f033-4cb1-b92c-d14ac3414abb-kube-api-access-sj5nd\") pod \"calico-apiserver-65945754fc-677z5\" (UID: \"52481471-f033-4cb1-b92c-d14ac3414abb\") " pod="calico-apiserver/calico-apiserver-65945754fc-677z5" Nov 12 20:48:07.963398 kubelet[2547]: I1112 20:48:07.961268 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmff\" (UniqueName: \"kubernetes.io/projected/dcce075f-b91c-4537-baf7-afddd002397a-kube-api-access-7hmff\") pod \"coredns-6f6b679f8f-jn6hw\" (UID: \"dcce075f-b91c-4537-baf7-afddd002397a\") " pod="kube-system/coredns-6f6b679f8f-jn6hw" Nov 12 20:48:07.963398 kubelet[2547]: I1112 20:48:07.961296 2547 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/52481471-f033-4cb1-b92c-d14ac3414abb-calico-apiserver-certs\") pod \"calico-apiserver-65945754fc-677z5\" (UID: \"52481471-f033-4cb1-b92c-d14ac3414abb\") " pod="calico-apiserver/calico-apiserver-65945754fc-677z5" Nov 12 20:48:08.156655 containerd[1464]: time="2024-11-12T20:48:08.156596111Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-fzbmz,Uid:f64531b6-17e8-4c48-9009-e59e3b3fc041,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:08.233114 containerd[1464]: time="2024-11-12T20:48:08.232967302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-qf8qk,Uid:21429838-1e78-4092-ae72-36d1f86e0ea6,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:48:08.233770 containerd[1464]: time="2024-11-12T20:48:08.232967333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-677z5,Uid:52481471-f033-4cb1-b92c-d14ac3414abb,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:48:08.233770 containerd[1464]: time="2024-11-12T20:48:08.233503026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn6hw,Uid:dcce075f-b91c-4537-baf7-afddd002397a,Namespace:kube-system,Attempt:0,}" Nov 12 20:48:08.233770 containerd[1464]: time="2024-11-12T20:48:08.233603537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656f8cbb56-jj5lc,Uid:a33ec532-6859-4a0f-a2fd-edeb7f28ebcb,Namespace:calico-system,Attempt:0,}" Nov 12 20:48:08.779410 containerd[1464]: time="2024-11-12T20:48:08.779263191Z" level=info msg="shim disconnected" id=e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd namespace=k8s.io Nov 12 20:48:08.779410 containerd[1464]: time="2024-11-12T20:48:08.779358583Z" level=warning msg="cleaning up after shim disconnected" id=e2077ba139b309395e4e15fdab12b1398a88247155b5e644dd0b3416367187cd namespace=k8s.io Nov 12 20:48:08.779410 containerd[1464]: time="2024-11-12T20:48:08.779374828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:48:08.958237 systemd[1]: Created slice kubepods-besteffort-pod135ae93f_39f0_4df8_85fb_bb23f14dc7a4.slice - libcontainer container kubepods-besteffort-pod135ae93f_39f0_4df8_85fb_bb23f14dc7a4.slice. 
Nov 12 20:48:08.967908 containerd[1464]: time="2024-11-12T20:48:08.967503263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4qsw,Uid:135ae93f-39f0-4df8-85fb-bb23f14dc7a4,Namespace:calico-system,Attempt:0,}" Nov 12 20:48:09.112500 containerd[1464]: time="2024-11-12T20:48:09.112226628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:48:09.186788 containerd[1464]: time="2024-11-12T20:48:09.186715639Z" level=error msg="Failed to destroy network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.189180 containerd[1464]: time="2024-11-12T20:48:09.189123404Z" level=error msg="encountered an error cleaning up failed sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.189704 containerd[1464]: time="2024-11-12T20:48:09.189652865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fzbmz,Uid:f64531b6-17e8-4c48-9009-e59e3b3fc041,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.190311 kubelet[2547]: E1112 20:48:09.190246 2547 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.190981 kubelet[2547]: E1112 20:48:09.190945 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fzbmz" Nov 12 20:48:09.192694 kubelet[2547]: E1112 20:48:09.191598 2547 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-fzbmz" Nov 12 20:48:09.192694 kubelet[2547]: E1112 20:48:09.191696 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-fzbmz_kube-system(f64531b6-17e8-4c48-9009-e59e3b3fc041)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-fzbmz_kube-system(f64531b6-17e8-4c48-9009-e59e3b3fc041)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fzbmz" 
podUID="f64531b6-17e8-4c48-9009-e59e3b3fc041" Nov 12 20:48:09.201361 containerd[1464]: time="2024-11-12T20:48:09.201291342Z" level=error msg="Failed to destroy network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.202090 containerd[1464]: time="2024-11-12T20:48:09.202045187Z" level=error msg="encountered an error cleaning up failed sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.202373 containerd[1464]: time="2024-11-12T20:48:09.202321978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-qf8qk,Uid:21429838-1e78-4092-ae72-36d1f86e0ea6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.202995 kubelet[2547]: E1112 20:48:09.202951 2547 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.203224 kubelet[2547]: E1112 20:48:09.203194 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65945754fc-qf8qk" Nov 12 20:48:09.203458 kubelet[2547]: E1112 20:48:09.203323 2547 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65945754fc-qf8qk" Nov 12 20:48:09.206344 kubelet[2547]: E1112 20:48:09.203425 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65945754fc-qf8qk_calico-apiserver(21429838-1e78-4092-ae72-36d1f86e0ea6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65945754fc-qf8qk_calico-apiserver(21429838-1e78-4092-ae72-36d1f86e0ea6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65945754fc-qf8qk" podUID="21429838-1e78-4092-ae72-36d1f86e0ea6" Nov 12 20:48:09.217765 containerd[1464]: time="2024-11-12T20:48:09.217698054Z" level=error msg="Failed to destroy network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.218198 containerd[1464]: time="2024-11-12T20:48:09.218153200Z" level=error msg="encountered an error cleaning up failed sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.218322 containerd[1464]: time="2024-11-12T20:48:09.218237046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656f8cbb56-jj5lc,Uid:a33ec532-6859-4a0f-a2fd-edeb7f28ebcb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.219756 kubelet[2547]: E1112 20:48:09.218533 2547 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.219756 kubelet[2547]: E1112 20:48:09.218634 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-656f8cbb56-jj5lc" Nov 12 20:48:09.219756 kubelet[2547]: E1112 20:48:09.218668 2547 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-656f8cbb56-jj5lc" Nov 12 20:48:09.219973 kubelet[2547]: E1112 20:48:09.218755 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-656f8cbb56-jj5lc_calico-system(a33ec532-6859-4a0f-a2fd-edeb7f28ebcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-656f8cbb56-jj5lc_calico-system(a33ec532-6859-4a0f-a2fd-edeb7f28ebcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-656f8cbb56-jj5lc" podUID="a33ec532-6859-4a0f-a2fd-edeb7f28ebcb" Nov 12 20:48:09.223893 containerd[1464]: time="2024-11-12T20:48:09.223685886Z" level=error msg="Failed to destroy network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.224810 containerd[1464]: time="2024-11-12T20:48:09.224589299Z" level=error msg="encountered an error cleaning up failed sandbox 
\"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.224810 containerd[1464]: time="2024-11-12T20:48:09.224680773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-677z5,Uid:52481471-f033-4cb1-b92c-d14ac3414abb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.226578 kubelet[2547]: E1112 20:48:09.225894 2547 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.226578 kubelet[2547]: E1112 20:48:09.226090 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65945754fc-677z5" Nov 12 20:48:09.226578 kubelet[2547]: E1112 20:48:09.226184 2547 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65945754fc-677z5" Nov 12 20:48:09.226875 kubelet[2547]: E1112 20:48:09.226392 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65945754fc-677z5_calico-apiserver(52481471-f033-4cb1-b92c-d14ac3414abb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65945754fc-677z5_calico-apiserver(52481471-f033-4cb1-b92c-d14ac3414abb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65945754fc-677z5" podUID="52481471-f033-4cb1-b92c-d14ac3414abb" Nov 12 20:48:09.235259 containerd[1464]: time="2024-11-12T20:48:09.235201207Z" level=error msg="Failed to destroy network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.235865 containerd[1464]: time="2024-11-12T20:48:09.235814433Z" level=error msg="encountered an error cleaning up failed sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 
20:48:09.235991 containerd[1464]: time="2024-11-12T20:48:09.235893912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn6hw,Uid:dcce075f-b91c-4537-baf7-afddd002397a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.236617 kubelet[2547]: E1112 20:48:09.236174 2547 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.236617 kubelet[2547]: E1112 20:48:09.236249 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jn6hw" Nov 12 20:48:09.236617 kubelet[2547]: E1112 20:48:09.236282 2547 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-jn6hw" Nov 12 20:48:09.236853 kubelet[2547]: 
E1112 20:48:09.236357 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-jn6hw_kube-system(dcce075f-b91c-4537-baf7-afddd002397a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-jn6hw_kube-system(dcce075f-b91c-4537-baf7-afddd002397a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jn6hw" podUID="dcce075f-b91c-4537-baf7-afddd002397a" Nov 12 20:48:09.265506 containerd[1464]: time="2024-11-12T20:48:09.265428513Z" level=error msg="Failed to destroy network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.265964 containerd[1464]: time="2024-11-12T20:48:09.265903732Z" level=error msg="encountered an error cleaning up failed sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.266124 containerd[1464]: time="2024-11-12T20:48:09.265991265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4qsw,Uid:135ae93f-39f0-4df8-85fb-bb23f14dc7a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.266335 kubelet[2547]: E1112 20:48:09.266258 2547 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:09.266421 kubelet[2547]: E1112 20:48:09.266354 2547 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j4qsw" Nov 12 20:48:09.266421 kubelet[2547]: E1112 20:48:09.266386 2547 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j4qsw" Nov 12 20:48:09.266539 kubelet[2547]: E1112 20:48:09.266448 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j4qsw_calico-system(135ae93f-39f0-4df8-85fb-bb23f14dc7a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j4qsw_calico-system(135ae93f-39f0-4df8-85fb-bb23f14dc7a4)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j4qsw" podUID="135ae93f-39f0-4df8-85fb-bb23f14dc7a4" Nov 12 20:48:09.736353 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc-shm.mount: Deactivated successfully. Nov 12 20:48:09.736490 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54-shm.mount: Deactivated successfully. Nov 12 20:48:09.736616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f-shm.mount: Deactivated successfully. Nov 12 20:48:10.103228 kubelet[2547]: I1112 20:48:10.103177 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Nov 12 20:48:10.107444 containerd[1464]: time="2024-11-12T20:48:10.105015459Z" level=info msg="StopPodSandbox for \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\"" Nov 12 20:48:10.107444 containerd[1464]: time="2024-11-12T20:48:10.105272561Z" level=info msg="Ensure that sandbox 15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca in task-service has been cleanup successfully" Nov 12 20:48:10.126802 kubelet[2547]: I1112 20:48:10.126768 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:10.129207 containerd[1464]: time="2024-11-12T20:48:10.129103475Z" level=info msg="StopPodSandbox for \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\"" Nov 12 20:48:10.129532 containerd[1464]: 
time="2024-11-12T20:48:10.129389095Z" level=info msg="Ensure that sandbox 440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f in task-service has been cleanup successfully" Nov 12 20:48:10.135688 kubelet[2547]: I1112 20:48:10.135589 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Nov 12 20:48:10.138860 containerd[1464]: time="2024-11-12T20:48:10.138595711Z" level=info msg="StopPodSandbox for \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\"" Nov 12 20:48:10.143169 containerd[1464]: time="2024-11-12T20:48:10.143111466Z" level=info msg="Ensure that sandbox 05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83 in task-service has been cleanup successfully" Nov 12 20:48:10.152597 kubelet[2547]: I1112 20:48:10.152123 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Nov 12 20:48:10.158473 containerd[1464]: time="2024-11-12T20:48:10.157770644Z" level=info msg="StopPodSandbox for \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\"" Nov 12 20:48:10.158473 containerd[1464]: time="2024-11-12T20:48:10.158096795Z" level=info msg="Ensure that sandbox 07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d in task-service has been cleanup successfully" Nov 12 20:48:10.167618 kubelet[2547]: I1112 20:48:10.166872 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:10.168314 containerd[1464]: time="2024-11-12T20:48:10.168273969Z" level=info msg="StopPodSandbox for \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\"" Nov 12 20:48:10.169501 containerd[1464]: time="2024-11-12T20:48:10.169467230Z" level=info msg="Ensure that sandbox 
69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc in task-service has been cleanup successfully" Nov 12 20:48:10.184773 kubelet[2547]: I1112 20:48:10.184738 2547 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:10.190115 containerd[1464]: time="2024-11-12T20:48:10.190074305Z" level=info msg="StopPodSandbox for \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\"" Nov 12 20:48:10.190791 containerd[1464]: time="2024-11-12T20:48:10.190756312Z" level=info msg="Ensure that sandbox e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54 in task-service has been cleanup successfully" Nov 12 20:48:10.330636 containerd[1464]: time="2024-11-12T20:48:10.330551094Z" level=error msg="StopPodSandbox for \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\" failed" error="failed to destroy network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:10.330931 kubelet[2547]: E1112 20:48:10.330867 2547 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:10.331453 kubelet[2547]: E1112 20:48:10.330956 2547 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54"} Nov 12 
20:48:10.331453 kubelet[2547]: E1112 20:48:10.331304 2547 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21429838-1e78-4092-ae72-36d1f86e0ea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:48:10.331453 kubelet[2547]: E1112 20:48:10.331346 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21429838-1e78-4092-ae72-36d1f86e0ea6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65945754fc-qf8qk" podUID="21429838-1e78-4092-ae72-36d1f86e0ea6" Nov 12 20:48:10.336793 containerd[1464]: time="2024-11-12T20:48:10.336731912Z" level=error msg="StopPodSandbox for \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\" failed" error="failed to destroy network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:10.337037 kubelet[2547]: E1112 20:48:10.336989 2547 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:10.337131 kubelet[2547]: E1112 20:48:10.337053 2547 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f"} Nov 12 20:48:10.337131 kubelet[2547]: E1112 20:48:10.337103 2547 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f64531b6-17e8-4c48-9009-e59e3b3fc041\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:48:10.337297 kubelet[2547]: E1112 20:48:10.337142 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f64531b6-17e8-4c48-9009-e59e3b3fc041\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-fzbmz" podUID="f64531b6-17e8-4c48-9009-e59e3b3fc041" Nov 12 20:48:10.341733 containerd[1464]: time="2024-11-12T20:48:10.341682750Z" level=error msg="StopPodSandbox for \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\" failed" error="failed to destroy network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:10.342330 kubelet[2547]: E1112 20:48:10.342146 2547 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Nov 12 20:48:10.342330 kubelet[2547]: E1112 20:48:10.342202 2547 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"} Nov 12 20:48:10.342330 kubelet[2547]: E1112 20:48:10.342250 2547 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:48:10.342330 kubelet[2547]: E1112 20:48:10.342286 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-656f8cbb56-jj5lc" podUID="a33ec532-6859-4a0f-a2fd-edeb7f28ebcb" Nov 12 20:48:10.357116 containerd[1464]: time="2024-11-12T20:48:10.355705858Z" level=error msg="StopPodSandbox for \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\" failed" error="failed to destroy network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:10.357116 containerd[1464]: time="2024-11-12T20:48:10.356981357Z" level=error msg="StopPodSandbox for \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\" failed" error="failed to destroy network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:10.357597 kubelet[2547]: E1112 20:48:10.355994 2547 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Nov 12 20:48:10.357597 kubelet[2547]: E1112 20:48:10.356057 2547 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"} Nov 12 20:48:10.357597 kubelet[2547]: E1112 20:48:10.356103 2547 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"135ae93f-39f0-4df8-85fb-bb23f14dc7a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:48:10.357597 kubelet[2547]: E1112 20:48:10.356137 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"135ae93f-39f0-4df8-85fb-bb23f14dc7a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j4qsw" podUID="135ae93f-39f0-4df8-85fb-bb23f14dc7a4" Nov 12 20:48:10.357939 kubelet[2547]: E1112 20:48:10.357499 2547 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Nov 12 20:48:10.357939 kubelet[2547]: E1112 20:48:10.357730 2547 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"} Nov 12 20:48:10.357939 kubelet[2547]: E1112 20:48:10.357814 2547 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"dcce075f-b91c-4537-baf7-afddd002397a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:48:10.360434 kubelet[2547]: E1112 20:48:10.358667 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dcce075f-b91c-4537-baf7-afddd002397a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-jn6hw" podUID="dcce075f-b91c-4537-baf7-afddd002397a" Nov 12 20:48:10.368030 containerd[1464]: time="2024-11-12T20:48:10.367961689Z" level=error msg="StopPodSandbox for \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\" failed" error="failed to destroy network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:48:10.368666 kubelet[2547]: E1112 20:48:10.368274 2547 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:10.368666 kubelet[2547]: E1112 20:48:10.368335 2547 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc"} Nov 12 20:48:10.368666 kubelet[2547]: E1112 20:48:10.368384 2547 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52481471-f033-4cb1-b92c-d14ac3414abb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:48:10.368666 kubelet[2547]: E1112 20:48:10.368426 2547 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52481471-f033-4cb1-b92c-d14ac3414abb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65945754fc-677z5" podUID="52481471-f033-4cb1-b92c-d14ac3414abb" Nov 12 20:48:11.254479 kubelet[2547]: I1112 20:48:11.253900 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:48:16.151063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2618858493.mount: Deactivated successfully. 
Nov 12 20:48:16.193246 containerd[1464]: time="2024-11-12T20:48:16.193171649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:16.194415 containerd[1464]: time="2024-11-12T20:48:16.194341724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:48:16.195952 containerd[1464]: time="2024-11-12T20:48:16.195856773Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:16.199171 containerd[1464]: time="2024-11-12T20:48:16.199096446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:16.200781 containerd[1464]: time="2024-11-12T20:48:16.199971283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 7.087461742s" Nov 12 20:48:16.200781 containerd[1464]: time="2024-11-12T20:48:16.200020794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:48:16.221888 containerd[1464]: time="2024-11-12T20:48:16.221836421Z" level=info msg="CreateContainer within sandbox \"49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:48:16.250955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197551522.mount: 
Deactivated successfully. Nov 12 20:48:16.252459 containerd[1464]: time="2024-11-12T20:48:16.252410889Z" level=info msg="CreateContainer within sandbox \"49101dfb6c0e6385fbac44b8eb1072d171efece58f6ca359f0d6e219330fb6b7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0dd368a7d5f625e20e17276b04e8d9a786fdf7d32164b258759e86dcd02d71f\"" Nov 12 20:48:16.255226 containerd[1464]: time="2024-11-12T20:48:16.253790652Z" level=info msg="StartContainer for \"e0dd368a7d5f625e20e17276b04e8d9a786fdf7d32164b258759e86dcd02d71f\"" Nov 12 20:48:16.296856 systemd[1]: Started cri-containerd-e0dd368a7d5f625e20e17276b04e8d9a786fdf7d32164b258759e86dcd02d71f.scope - libcontainer container e0dd368a7d5f625e20e17276b04e8d9a786fdf7d32164b258759e86dcd02d71f. Nov 12 20:48:16.340386 containerd[1464]: time="2024-11-12T20:48:16.339136154Z" level=info msg="StartContainer for \"e0dd368a7d5f625e20e17276b04e8d9a786fdf7d32164b258759e86dcd02d71f\" returns successfully" Nov 12 20:48:16.450980 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:48:16.451140 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 12 20:48:18.309597 kernel: bpftool[3820]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:48:18.608267 systemd-networkd[1376]: vxlan.calico: Link UP Nov 12 20:48:18.608283 systemd-networkd[1376]: vxlan.calico: Gained carrier Nov 12 20:48:19.868038 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Nov 12 20:48:20.941499 containerd[1464]: time="2024-11-12T20:48:20.941271414Z" level=info msg="StopPodSandbox for \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\"" Nov 12 20:48:21.013009 kubelet[2547]: I1112 20:48:21.012315 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5fr48" podStartSLOduration=6.069775691 podStartE2EDuration="23.012288263s" podCreationTimestamp="2024-11-12 20:47:58 +0000 UTC" firstStartedPulling="2024-11-12 20:47:59.258646298 +0000 UTC m=+15.478039096" lastFinishedPulling="2024-11-12 20:48:16.201158885 +0000 UTC m=+32.420551668" observedRunningTime="2024-11-12 20:48:17.248461082 +0000 UTC m=+33.467853891" watchObservedRunningTime="2024-11-12 20:48:21.012288263 +0000 UTC m=+37.231681133" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.016 [INFO][3911] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.016 [INFO][3911] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" iface="eth0" netns="/var/run/netns/cni-c4638ee4-26cf-eb33-2d3f-f37c68f93969" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.016 [INFO][3911] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" iface="eth0" netns="/var/run/netns/cni-c4638ee4-26cf-eb33-2d3f-f37c68f93969" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.017 [INFO][3911] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" iface="eth0" netns="/var/run/netns/cni-c4638ee4-26cf-eb33-2d3f-f37c68f93969" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.017 [INFO][3911] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.017 [INFO][3911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.045 [INFO][3917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.045 [INFO][3917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.046 [INFO][3917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.054 [WARNING][3917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.054 [INFO][3917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.057 [INFO][3917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:21.061490 containerd[1464]: 2024-11-12 20:48:21.059 [INFO][3911] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Nov 12 20:48:21.064609 containerd[1464]: time="2024-11-12T20:48:21.063076947Z" level=info msg="TearDown network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\" successfully" Nov 12 20:48:21.064609 containerd[1464]: time="2024-11-12T20:48:21.063126627Z" level=info msg="StopPodSandbox for \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\" returns successfully" Nov 12 20:48:21.066679 containerd[1464]: time="2024-11-12T20:48:21.065077186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4qsw,Uid:135ae93f-39f0-4df8-85fb-bb23f14dc7a4,Namespace:calico-system,Attempt:1,}" Nov 12 20:48:21.070248 systemd[1]: run-netns-cni\x2dc4638ee4\x2d26cf\x2deb33\x2d2d3f\x2df37c68f93969.mount: Deactivated successfully. 
Nov 12 20:48:21.238066 systemd-networkd[1376]: cali0772a311ab2: Link UP Nov 12 20:48:21.242105 systemd-networkd[1376]: cali0772a311ab2: Gained carrier Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.135 [INFO][3923] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0 csi-node-driver- calico-system 135ae93f-39f0-4df8-85fb-bb23f14dc7a4 766 0 2024-11-12 20:47:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:548d65b7bf k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal csi-node-driver-j4qsw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0772a311ab2 [] []}} ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.138 [INFO][3923] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.179 [INFO][3935] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" HandleID="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" 
Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.190 [INFO][3935] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" HandleID="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003bd330), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", "pod":"csi-node-driver-j4qsw", "timestamp":"2024-11-12 20:48:21.179717522 +0000 UTC"}, Hostname:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.190 [INFO][3935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.190 [INFO][3935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.190 [INFO][3935] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal' Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.193 [INFO][3935] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.198 [INFO][3935] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.205 [INFO][3935] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.207 [INFO][3935] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.211 [INFO][3935] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.211 [INFO][3935] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.213 [INFO][3935] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8 Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.218 [INFO][3935] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.125.128/26 handle="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.229 [INFO][3935] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.129/26] block=192.168.125.128/26 handle="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.229 [INFO][3935] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.129/26] handle="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.229 [INFO][3935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:21.259598 containerd[1464]: 2024-11-12 20:48:21.229 [INFO][3935] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.129/26] IPv6=[] ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" HandleID="k8s-pod-network.74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.260774 containerd[1464]: 2024-11-12 20:48:21.232 [INFO][3923] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135ae93f-39f0-4df8-85fb-bb23f14dc7a4", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-j4qsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0772a311ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:21.260774 containerd[1464]: 2024-11-12 20:48:21.232 [INFO][3923] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.129/32] ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.260774 containerd[1464]: 2024-11-12 20:48:21.232 [INFO][3923] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0772a311ab2 
ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.260774 containerd[1464]: 2024-11-12 20:48:21.234 [INFO][3923] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.260774 containerd[1464]: 2024-11-12 20:48:21.235 [INFO][3923] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135ae93f-39f0-4df8-85fb-bb23f14dc7a4", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8", Pod:"csi-node-driver-j4qsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0772a311ab2", MAC:"d6:1c:f6:11:4b:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:21.260774 containerd[1464]: 2024-11-12 20:48:21.253 [INFO][3923] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8" Namespace="calico-system" Pod="csi-node-driver-j4qsw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0" Nov 12 20:48:21.305433 containerd[1464]: time="2024-11-12T20:48:21.304362798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:21.305433 containerd[1464]: time="2024-11-12T20:48:21.304443188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:21.305433 containerd[1464]: time="2024-11-12T20:48:21.304468737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:21.305433 containerd[1464]: time="2024-11-12T20:48:21.304949301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:21.343421 systemd[1]: run-containerd-runc-k8s.io-74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8-runc.yrwbht.mount: Deactivated successfully. Nov 12 20:48:21.354875 systemd[1]: Started cri-containerd-74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8.scope - libcontainer container 74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8. Nov 12 20:48:21.394520 containerd[1464]: time="2024-11-12T20:48:21.394468627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j4qsw,Uid:135ae93f-39f0-4df8-85fb-bb23f14dc7a4,Namespace:calico-system,Attempt:1,} returns sandbox id \"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8\"" Nov 12 20:48:21.398400 containerd[1464]: time="2024-11-12T20:48:21.398150522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:48:21.944200 containerd[1464]: time="2024-11-12T20:48:21.942033346Z" level=info msg="StopPodSandbox for \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\"" Nov 12 20:48:21.945171 containerd[1464]: time="2024-11-12T20:48:21.945131643Z" level=info msg="StopPodSandbox for \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\"" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.036 [INFO][4022] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.037 [INFO][4022] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" iface="eth0" netns="/var/run/netns/cni-d12802f0-35a4-6924-d039-c64ad9cd45ad" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.039 [INFO][4022] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" iface="eth0" netns="/var/run/netns/cni-d12802f0-35a4-6924-d039-c64ad9cd45ad" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.039 [INFO][4022] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" iface="eth0" netns="/var/run/netns/cni-d12802f0-35a4-6924-d039-c64ad9cd45ad" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.039 [INFO][4022] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.039 [INFO][4022] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.093 [INFO][4034] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.093 [INFO][4034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.093 [INFO][4034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.102 [WARNING][4034] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.102 [INFO][4034] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.105 [INFO][4034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:22.110541 containerd[1464]: 2024-11-12 20:48:22.108 [INFO][4022] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Nov 12 20:48:22.121717 systemd[1]: run-netns-cni\x2dd12802f0\x2d35a4\x2d6924\x2dd039\x2dc64ad9cd45ad.mount: Deactivated successfully. 
Nov 12 20:48:22.122322 containerd[1464]: time="2024-11-12T20:48:22.121700548Z" level=info msg="TearDown network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\" successfully" Nov 12 20:48:22.122322 containerd[1464]: time="2024-11-12T20:48:22.121745766Z" level=info msg="StopPodSandbox for \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\" returns successfully" Nov 12 20:48:22.123900 containerd[1464]: time="2024-11-12T20:48:22.123281513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn6hw,Uid:dcce075f-b91c-4537-baf7-afddd002397a,Namespace:kube-system,Attempt:1,}" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.045 [INFO][4023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.045 [INFO][4023] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" iface="eth0" netns="/var/run/netns/cni-e2361ea2-ed61-c1e3-750f-572350382e33" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.046 [INFO][4023] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" iface="eth0" netns="/var/run/netns/cni-e2361ea2-ed61-c1e3-750f-572350382e33" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.047 [INFO][4023] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" iface="eth0" netns="/var/run/netns/cni-e2361ea2-ed61-c1e3-750f-572350382e33" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.047 [INFO][4023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.047 [INFO][4023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.094 [INFO][4038] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.094 [INFO][4038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.105 [INFO][4038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.117 [WARNING][4038] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.118 [INFO][4038] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.120 [INFO][4038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:22.127745 containerd[1464]: 2024-11-12 20:48:22.126 [INFO][4023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:22.131605 containerd[1464]: time="2024-11-12T20:48:22.130446734Z" level=info msg="TearDown network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\" successfully" Nov 12 20:48:22.131605 containerd[1464]: time="2024-11-12T20:48:22.130481550Z" level=info msg="StopPodSandbox for \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\" returns successfully" Nov 12 20:48:22.135163 containerd[1464]: time="2024-11-12T20:48:22.134726510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-677z5,Uid:52481471-f033-4cb1-b92c-d14ac3414abb,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:48:22.136661 systemd[1]: run-netns-cni\x2de2361ea2\x2ded61\x2dc1e3\x2d750f\x2d572350382e33.mount: Deactivated successfully. 
Nov 12 20:48:22.422249 systemd-networkd[1376]: cali3b88afa3676: Link UP Nov 12 20:48:22.425453 systemd-networkd[1376]: cali3b88afa3676: Gained carrier Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.260 [INFO][4052] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0 calico-apiserver-65945754fc- calico-apiserver 52481471-f033-4cb1-b92c-d14ac3414abb 777 0 2024-11-12 20:47:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65945754fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal calico-apiserver-65945754fc-677z5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3b88afa3676 [] []}} ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.260 [INFO][4052] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.325 [INFO][4070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" 
HandleID="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.349 [INFO][4070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" HandleID="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000383f60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", "pod":"calico-apiserver-65945754fc-677z5", "timestamp":"2024-11-12 20:48:22.325422886 +0000 UTC"}, Hostname:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.349 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.349 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.349 [INFO][4070] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal' Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.354 [INFO][4070] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.365 [INFO][4070] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.373 [INFO][4070] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.379 [INFO][4070] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.383 [INFO][4070] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.383 [INFO][4070] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.385 [INFO][4070] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19 Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.392 [INFO][4070] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.125.128/26 handle="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.404 [INFO][4070] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.130/26] block=192.168.125.128/26 handle="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.405 [INFO][4070] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.130/26] handle="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.405 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:22.455254 containerd[1464]: 2024-11-12 20:48:22.406 [INFO][4070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.130/26] IPv6=[] ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" HandleID="k8s-pod-network.28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.456767 containerd[1464]: 2024-11-12 20:48:22.414 [INFO][4052] cni-plugin/k8s.go 386: Populated endpoint ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52481471-f033-4cb1-b92c-d14ac3414abb", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-65945754fc-677z5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b88afa3676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:22.456767 containerd[1464]: 2024-11-12 20:48:22.414 [INFO][4052] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.130/32] ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.456767 containerd[1464]: 2024-11-12 20:48:22.414 [INFO][4052] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to cali3b88afa3676 ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.456767 containerd[1464]: 2024-11-12 20:48:22.426 [INFO][4052] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.456767 containerd[1464]: 2024-11-12 20:48:22.426 [INFO][4052] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52481471-f033-4cb1-b92c-d14ac3414abb", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19", Pod:"calico-apiserver-65945754fc-677z5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b88afa3676", MAC:"da:8f:e0:f7:94:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:22.456767 containerd[1464]: 2024-11-12 20:48:22.442 [INFO][4052] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-677z5" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:22.547087 containerd[1464]: time="2024-11-12T20:48:22.544025148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:22.547087 containerd[1464]: time="2024-11-12T20:48:22.544150342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:22.547087 containerd[1464]: time="2024-11-12T20:48:22.544180135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:22.547087 containerd[1464]: time="2024-11-12T20:48:22.544304654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:22.566218 systemd-networkd[1376]: calie0a9fedb919: Link UP Nov 12 20:48:22.568593 systemd-networkd[1376]: calie0a9fedb919: Gained carrier Nov 12 20:48:22.599707 systemd[1]: Started cri-containerd-28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19.scope - libcontainer container 28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19. Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.281 [INFO][4047] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0 coredns-6f6b679f8f- kube-system dcce075f-b91c-4537-baf7-afddd002397a 776 0 2024-11-12 20:47:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal coredns-6f6b679f8f-jn6hw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0a9fedb919 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.281 [INFO][4047] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" 
WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.389 [INFO][4074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" HandleID="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.465 [INFO][4074] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" HandleID="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-jn6hw", "timestamp":"2024-11-12 20:48:22.388993983 +0000 UTC"}, Hostname:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.465 [INFO][4074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.466 [INFO][4074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.466 [INFO][4074] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal' Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.472 [INFO][4074] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.482 [INFO][4074] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.496 [INFO][4074] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.502 [INFO][4074] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.511 [INFO][4074] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.511 [INFO][4074] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.518 [INFO][4074] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0 Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.535 [INFO][4074] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.125.128/26 handle="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.550 [INFO][4074] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.131/26] block=192.168.125.128/26 handle="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.550 [INFO][4074] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.131/26] handle="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.550 [INFO][4074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:22.619319 containerd[1464]: 2024-11-12 20:48:22.550 [INFO][4074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.131/26] IPv6=[] ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" HandleID="k8s-pod-network.0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.621298 containerd[1464]: 2024-11-12 20:48:22.554 [INFO][4047] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dcce075f-b91c-4537-baf7-afddd002397a", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-jn6hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0a9fedb919", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:22.621298 containerd[1464]: 2024-11-12 20:48:22.554 [INFO][4047] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.131/32] ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" 
WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.621298 containerd[1464]: 2024-11-12 20:48:22.555 [INFO][4047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0a9fedb919 ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.621298 containerd[1464]: 2024-11-12 20:48:22.572 [INFO][4047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.621298 containerd[1464]: 2024-11-12 20:48:22.576 [INFO][4047] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dcce075f-b91c-4537-baf7-afddd002397a", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0", Pod:"coredns-6f6b679f8f-jn6hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0a9fedb919", MAC:"4a:d3:78:53:90:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:22.621298 containerd[1464]: 2024-11-12 20:48:22.604 [INFO][4047] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0" Namespace="kube-system" Pod="coredns-6f6b679f8f-jn6hw" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0" Nov 12 20:48:22.698241 containerd[1464]: time="2024-11-12T20:48:22.697015875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:22.698241 containerd[1464]: time="2024-11-12T20:48:22.697106140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:22.698241 containerd[1464]: time="2024-11-12T20:48:22.697132877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:22.698241 containerd[1464]: time="2024-11-12T20:48:22.697278176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:22.744802 systemd[1]: Started cri-containerd-0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0.scope - libcontainer container 0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0. Nov 12 20:48:22.787007 containerd[1464]: time="2024-11-12T20:48:22.786937118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-677z5,Uid:52481471-f033-4cb1-b92c-d14ac3414abb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19\"" Nov 12 20:48:22.811786 systemd-networkd[1376]: cali0772a311ab2: Gained IPv6LL Nov 12 20:48:22.855009 containerd[1464]: time="2024-11-12T20:48:22.854956800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jn6hw,Uid:dcce075f-b91c-4537-baf7-afddd002397a,Namespace:kube-system,Attempt:1,} returns sandbox id \"0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0\"" Nov 12 20:48:22.859513 containerd[1464]: time="2024-11-12T20:48:22.859470910Z" level=info msg="CreateContainer within sandbox \"0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:48:22.878771 containerd[1464]: time="2024-11-12T20:48:22.878664100Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:22.883894 containerd[1464]: time="2024-11-12T20:48:22.883828482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:48:22.885517 containerd[1464]: time="2024-11-12T20:48:22.885472987Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:22.892819 containerd[1464]: time="2024-11-12T20:48:22.891661810Z" level=info msg="CreateContainer within sandbox \"0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b3021e7d9525e7de4fdebb87a9c5ce033f8e6d7b6428b95212cb4c8a0445c23\"" Nov 12 20:48:22.892819 containerd[1464]: time="2024-11-12T20:48:22.891867724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:22.892819 containerd[1464]: time="2024-11-12T20:48:22.892686613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.494486161s" Nov 12 20:48:22.892819 containerd[1464]: time="2024-11-12T20:48:22.892725180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:48:22.895223 containerd[1464]: time="2024-11-12T20:48:22.895191790Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:48:22.896283 containerd[1464]: time="2024-11-12T20:48:22.896247209Z" level=info msg="CreateContainer within sandbox \"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:48:22.896599 containerd[1464]: time="2024-11-12T20:48:22.896500748Z" level=info msg="StartContainer for \"2b3021e7d9525e7de4fdebb87a9c5ce033f8e6d7b6428b95212cb4c8a0445c23\"" Nov 12 20:48:22.929690 containerd[1464]: time="2024-11-12T20:48:22.929639464Z" level=info msg="CreateContainer within sandbox \"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8bcc69b592acacd577a759803764239569da2ce103040e9c8b6c6bcf821a90f2\"" Nov 12 20:48:22.931855 containerd[1464]: time="2024-11-12T20:48:22.931759005Z" level=info msg="StartContainer for \"8bcc69b592acacd577a759803764239569da2ce103040e9c8b6c6bcf821a90f2\"" Nov 12 20:48:22.945798 systemd[1]: Started cri-containerd-2b3021e7d9525e7de4fdebb87a9c5ce033f8e6d7b6428b95212cb4c8a0445c23.scope - libcontainer container 2b3021e7d9525e7de4fdebb87a9c5ce033f8e6d7b6428b95212cb4c8a0445c23. Nov 12 20:48:22.992841 systemd[1]: Started cri-containerd-8bcc69b592acacd577a759803764239569da2ce103040e9c8b6c6bcf821a90f2.scope - libcontainer container 8bcc69b592acacd577a759803764239569da2ce103040e9c8b6c6bcf821a90f2. 
Nov 12 20:48:23.009946 containerd[1464]: time="2024-11-12T20:48:23.009866891Z" level=info msg="StartContainer for \"2b3021e7d9525e7de4fdebb87a9c5ce033f8e6d7b6428b95212cb4c8a0445c23\" returns successfully" Nov 12 20:48:23.068397 containerd[1464]: time="2024-11-12T20:48:23.068326187Z" level=info msg="StartContainer for \"8bcc69b592acacd577a759803764239569da2ce103040e9c8b6c6bcf821a90f2\" returns successfully" Nov 12 20:48:23.275750 kubelet[2547]: I1112 20:48:23.275550 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jn6hw" podStartSLOduration=32.275523362 podStartE2EDuration="32.275523362s" podCreationTimestamp="2024-11-12 20:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:23.274246618 +0000 UTC m=+39.493639427" watchObservedRunningTime="2024-11-12 20:48:23.275523362 +0000 UTC m=+39.494916171" Nov 12 20:48:23.580253 systemd-networkd[1376]: calie0a9fedb919: Gained IPv6LL Nov 12 20:48:23.943551 containerd[1464]: time="2024-11-12T20:48:23.942760418Z" level=info msg="StopPodSandbox for \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\"" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.031 [INFO][4285] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.031 [INFO][4285] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" iface="eth0" netns="/var/run/netns/cni-9133b033-2879-3bc5-319a-1063173063b5" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.032 [INFO][4285] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" iface="eth0" netns="/var/run/netns/cni-9133b033-2879-3bc5-319a-1063173063b5" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.032 [INFO][4285] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" iface="eth0" netns="/var/run/netns/cni-9133b033-2879-3bc5-319a-1063173063b5" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.032 [INFO][4285] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.032 [INFO][4285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.096 [INFO][4299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.097 [INFO][4299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.097 [INFO][4299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.110 [WARNING][4299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.110 [INFO][4299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.113 [INFO][4299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:24.123316 containerd[1464]: 2024-11-12 20:48:24.118 [INFO][4285] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:24.123316 containerd[1464]: time="2024-11-12T20:48:24.122884361Z" level=info msg="TearDown network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\" successfully" Nov 12 20:48:24.123316 containerd[1464]: time="2024-11-12T20:48:24.122922471Z" level=info msg="StopPodSandbox for \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\" returns successfully" Nov 12 20:48:24.128692 containerd[1464]: time="2024-11-12T20:48:24.124498489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fzbmz,Uid:f64531b6-17e8-4c48-9009-e59e3b3fc041,Namespace:kube-system,Attempt:1,}" Nov 12 20:48:24.132250 systemd[1]: run-netns-cni\x2d9133b033\x2d2879\x2d3bc5\x2d319a\x2d1063173063b5.mount: Deactivated successfully. 
Nov 12 20:48:24.156080 systemd-networkd[1376]: cali3b88afa3676: Gained IPv6LL Nov 12 20:48:24.427615 systemd-networkd[1376]: cali43aa47464ce: Link UP Nov 12 20:48:24.427986 systemd-networkd[1376]: cali43aa47464ce: Gained carrier Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.278 [INFO][4306] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0 coredns-6f6b679f8f- kube-system f64531b6-17e8-4c48-9009-e59e3b3fc041 806 0 2024-11-12 20:47:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal coredns-6f6b679f8f-fzbmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43aa47464ce [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.278 [INFO][4306] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.337 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" HandleID="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" 
Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.360 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" HandleID="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", "pod":"coredns-6f6b679f8f-fzbmz", "timestamp":"2024-11-12 20:48:24.337913958 +0000 UTC"}, Hostname:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.360 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.361 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.361 [INFO][4321] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal' Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.364 [INFO][4321] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.372 [INFO][4321] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.380 [INFO][4321] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.384 [INFO][4321] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.388 [INFO][4321] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.388 [INFO][4321] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.391 [INFO][4321] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397 Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.400 [INFO][4321] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.125.128/26 handle="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.414 [INFO][4321] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.132/26] block=192.168.125.128/26 handle="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.414 [INFO][4321] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.132/26] handle="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.414 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:24.459401 containerd[1464]: 2024-11-12 20:48:24.414 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.132/26] IPv6=[] ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" HandleID="k8s-pod-network.d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.460634 containerd[1464]: 2024-11-12 20:48:24.416 [INFO][4306] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f64531b6-17e8-4c48-9009-e59e3b3fc041", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-6f6b679f8f-fzbmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43aa47464ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:24.460634 containerd[1464]: 2024-11-12 20:48:24.417 [INFO][4306] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.132/32] ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" 
WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.460634 containerd[1464]: 2024-11-12 20:48:24.417 [INFO][4306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43aa47464ce ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.460634 containerd[1464]: 2024-11-12 20:48:24.428 [INFO][4306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.460634 containerd[1464]: 2024-11-12 20:48:24.429 [INFO][4306] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f64531b6-17e8-4c48-9009-e59e3b3fc041", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397", Pod:"coredns-6f6b679f8f-fzbmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43aa47464ce", MAC:"92:15:eb:e2:ff:99", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:24.460634 containerd[1464]: 2024-11-12 20:48:24.453 [INFO][4306] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397" Namespace="kube-system" Pod="coredns-6f6b679f8f-fzbmz" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:24.523983 containerd[1464]: time="2024-11-12T20:48:24.522497389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:24.523983 containerd[1464]: time="2024-11-12T20:48:24.522623754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:24.523983 containerd[1464]: time="2024-11-12T20:48:24.522651792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:24.523983 containerd[1464]: time="2024-11-12T20:48:24.522962988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:24.582814 systemd[1]: Started cri-containerd-d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397.scope - libcontainer container d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397. Nov 12 20:48:24.667966 containerd[1464]: time="2024-11-12T20:48:24.667911764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fzbmz,Uid:f64531b6-17e8-4c48-9009-e59e3b3fc041,Namespace:kube-system,Attempt:1,} returns sandbox id \"d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397\"" Nov 12 20:48:24.674751 containerd[1464]: time="2024-11-12T20:48:24.674500959Z" level=info msg="CreateContainer within sandbox \"d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:48:24.701697 containerd[1464]: time="2024-11-12T20:48:24.700818249Z" level=info msg="CreateContainer within sandbox \"d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fbc8aa66b1335e533ddca92045df72002e3624e32484cc0be1398f9d54629666\"" Nov 12 20:48:24.703831 containerd[1464]: time="2024-11-12T20:48:24.703602588Z" level=info msg="StartContainer for \"fbc8aa66b1335e533ddca92045df72002e3624e32484cc0be1398f9d54629666\"" 
Nov 12 20:48:24.766026 systemd[1]: Started cri-containerd-fbc8aa66b1335e533ddca92045df72002e3624e32484cc0be1398f9d54629666.scope - libcontainer container fbc8aa66b1335e533ddca92045df72002e3624e32484cc0be1398f9d54629666. Nov 12 20:48:24.822263 containerd[1464]: time="2024-11-12T20:48:24.822205057Z" level=info msg="StartContainer for \"fbc8aa66b1335e533ddca92045df72002e3624e32484cc0be1398f9d54629666\" returns successfully" Nov 12 20:48:24.941674 containerd[1464]: time="2024-11-12T20:48:24.941256231Z" level=info msg="StopPodSandbox for \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\"" Nov 12 20:48:24.943430 containerd[1464]: time="2024-11-12T20:48:24.943391492Z" level=info msg="StopPodSandbox for \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\"" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.100 [INFO][4451] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.100 [INFO][4451] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" iface="eth0" netns="/var/run/netns/cni-528a8c5c-eab4-c662-ad62-90dfce3e95f2" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.101 [INFO][4451] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" iface="eth0" netns="/var/run/netns/cni-528a8c5c-eab4-c662-ad62-90dfce3e95f2" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.104 [INFO][4451] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" iface="eth0" netns="/var/run/netns/cni-528a8c5c-eab4-c662-ad62-90dfce3e95f2" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.104 [INFO][4451] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.104 [INFO][4451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.173 [INFO][4461] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.173 [INFO][4461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.173 [INFO][4461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.187 [WARNING][4461] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.188 [INFO][4461] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.193 [INFO][4461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:25.204111 containerd[1464]: 2024-11-12 20:48:25.199 [INFO][4451] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:25.210640 containerd[1464]: time="2024-11-12T20:48:25.208715636Z" level=info msg="TearDown network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\" successfully" Nov 12 20:48:25.210640 containerd[1464]: time="2024-11-12T20:48:25.208772258Z" level=info msg="StopPodSandbox for \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\" returns successfully" Nov 12 20:48:25.212693 systemd[1]: run-netns-cni\x2d528a8c5c\x2deab4\x2dc662\x2dad62\x2d90dfce3e95f2.mount: Deactivated successfully. 
Nov 12 20:48:25.214260 containerd[1464]: time="2024-11-12T20:48:25.213703336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-qf8qk,Uid:21429838-1e78-4092-ae72-36d1f86e0ea6,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.104 [INFO][4444] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.106 [INFO][4444] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" iface="eth0" netns="/var/run/netns/cni-78b85188-fa8f-def2-9986-2022871a0b69" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.106 [INFO][4444] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" iface="eth0" netns="/var/run/netns/cni-78b85188-fa8f-def2-9986-2022871a0b69" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.106 [INFO][4444] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" iface="eth0" netns="/var/run/netns/cni-78b85188-fa8f-def2-9986-2022871a0b69" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.107 [INFO][4444] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.107 [INFO][4444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.221 [INFO][4462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.221 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.222 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.240 [WARNING][4462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.240 [INFO][4462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.242 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:25.261173 containerd[1464]: 2024-11-12 20:48:25.252 [INFO][4444] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Nov 12 20:48:25.266757 containerd[1464]: time="2024-11-12T20:48:25.266703160Z" level=info msg="TearDown network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\" successfully" Nov 12 20:48:25.266757 containerd[1464]: time="2024-11-12T20:48:25.266757680Z" level=info msg="StopPodSandbox for \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\" returns successfully" Nov 12 20:48:25.267740 systemd[1]: run-netns-cni\x2d78b85188\x2dfa8f\x2ddef2\x2d9986\x2d2022871a0b69.mount: Deactivated successfully. 
Nov 12 20:48:25.271138 containerd[1464]: time="2024-11-12T20:48:25.269265145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656f8cbb56-jj5lc,Uid:a33ec532-6859-4a0f-a2fd-edeb7f28ebcb,Namespace:calico-system,Attempt:1,}" Nov 12 20:48:25.301609 kubelet[2547]: I1112 20:48:25.301395 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fzbmz" podStartSLOduration=34.301367521 podStartE2EDuration="34.301367521s" podCreationTimestamp="2024-11-12 20:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:48:25.301091781 +0000 UTC m=+41.520484589" watchObservedRunningTime="2024-11-12 20:48:25.301367521 +0000 UTC m=+41.520760332" Nov 12 20:48:25.630830 systemd-networkd[1376]: cali43aa47464ce: Gained IPv6LL Nov 12 20:48:25.632738 systemd-networkd[1376]: cali6b0d05f3618: Link UP Nov 12 20:48:25.635717 systemd-networkd[1376]: cali6b0d05f3618: Gained carrier Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.435 [INFO][4482] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0 calico-kube-controllers-656f8cbb56- calico-system a33ec532-6859-4a0f-a2fd-edeb7f28ebcb 817 0 2024-11-12 20:47:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:656f8cbb56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal calico-kube-controllers-656f8cbb56-jj5lc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6b0d05f3618 [] []}} 
ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.436 [INFO][4482] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.545 [INFO][4501] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" HandleID="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.563 [INFO][4501] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" HandleID="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036d3c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", "pod":"calico-kube-controllers-656f8cbb56-jj5lc", "timestamp":"2024-11-12 20:48:25.545381051 +0000 UTC"}, Hostname:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.563 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.563 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.563 [INFO][4501] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal' Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.567 [INFO][4501] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.574 [INFO][4501] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.581 [INFO][4501] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.584 [INFO][4501] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.592 [INFO][4501] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.592 [INFO][4501] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 
handle="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.595 [INFO][4501] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.605 [INFO][4501] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.619 [INFO][4501] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.133/26] block=192.168.125.128/26 handle="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.619 [INFO][4501] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.133/26] handle="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.619 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:48:25.677531 containerd[1464]: 2024-11-12 20:48:25.619 [INFO][4501] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.133/26] IPv6=[] ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" HandleID="k8s-pod-network.90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.678660 containerd[1464]: 2024-11-12 20:48:25.622 [INFO][4482] cni-plugin/k8s.go 386: Populated endpoint ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0", GenerateName:"calico-kube-controllers-656f8cbb56-", Namespace:"calico-system", SelfLink:"", UID:"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"656f8cbb56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-656f8cbb56-jj5lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b0d05f3618", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:25.678660 containerd[1464]: 2024-11-12 20:48:25.623 [INFO][4482] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.133/32] ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.678660 containerd[1464]: 2024-11-12 20:48:25.623 [INFO][4482] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b0d05f3618 ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.678660 containerd[1464]: 2024-11-12 20:48:25.634 [INFO][4482] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.678660 containerd[1464]: 2024-11-12 20:48:25.634 [INFO][4482] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0", GenerateName:"calico-kube-controllers-656f8cbb56-", Namespace:"calico-system", SelfLink:"", UID:"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"656f8cbb56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed", Pod:"calico-kube-controllers-656f8cbb56-jj5lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b0d05f3618", MAC:"3a:49:88:0d:19:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:25.678660 containerd[1464]: 
2024-11-12 20:48:25.670 [INFO][4482] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed" Namespace="calico-system" Pod="calico-kube-controllers-656f8cbb56-jj5lc" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0" Nov 12 20:48:25.779515 containerd[1464]: time="2024-11-12T20:48:25.773111400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:25.779515 containerd[1464]: time="2024-11-12T20:48:25.773205618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:25.779515 containerd[1464]: time="2024-11-12T20:48:25.773231371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:25.779515 containerd[1464]: time="2024-11-12T20:48:25.773374878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:25.808930 systemd-networkd[1376]: cali7f98563a987: Link UP Nov 12 20:48:25.809295 systemd-networkd[1376]: cali7f98563a987: Gained carrier Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.451 [INFO][4473] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0 calico-apiserver-65945754fc- calico-apiserver 21429838-1e78-4092-ae72-36d1f86e0ea6 816 0 2024-11-12 20:47:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65945754fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal calico-apiserver-65945754fc-qf8qk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7f98563a987 [] []}} ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.452 [INFO][4473] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.539 [INFO][4500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" HandleID="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.566 [INFO][4500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" HandleID="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ec4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", "pod":"calico-apiserver-65945754fc-qf8qk", "timestamp":"2024-11-12 20:48:25.539088018 +0000 UTC"}, Hostname:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.566 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.619 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.619 [INFO][4500] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal' Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.675 [INFO][4500] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.692 [INFO][4500] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.709 [INFO][4500] ipam/ipam.go 489: Trying affinity for 192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.718 [INFO][4500] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.725 [INFO][4500] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.725 [INFO][4500] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.744 [INFO][4500] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217 Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.776 [INFO][4500] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.125.128/26 handle="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.794 [INFO][4500] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.134/26] block=192.168.125.128/26 handle="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.794 [INFO][4500] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.134/26] handle="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" host="ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal" Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.794 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:25.880415 containerd[1464]: 2024-11-12 20:48:25.794 [INFO][4500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.134/26] IPv6=[] ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" HandleID="k8s-pod-network.7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.883524 containerd[1464]: 2024-11-12 20:48:25.800 [INFO][4473] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"21429838-1e78-4092-ae72-36d1f86e0ea6", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-65945754fc-qf8qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f98563a987", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:25.883524 containerd[1464]: 2024-11-12 20:48:25.800 [INFO][4473] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.134/32] ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.883524 containerd[1464]: 2024-11-12 20:48:25.801 [INFO][4473] cni-plugin/dataplane_linux.go 69: Setting the 
host side veth name to cali7f98563a987 ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.883524 containerd[1464]: 2024-11-12 20:48:25.811 [INFO][4473] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.883524 containerd[1464]: 2024-11-12 20:48:25.817 [INFO][4473] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"21429838-1e78-4092-ae72-36d1f86e0ea6", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217", Pod:"calico-apiserver-65945754fc-qf8qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f98563a987", MAC:"d2:14:45:c9:49:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:25.883524 containerd[1464]: 2024-11-12 20:48:25.874 [INFO][4473] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217" Namespace="calico-apiserver" Pod="calico-apiserver-65945754fc-qf8qk" WorkloadEndpoint="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:25.907103 systemd[1]: Started cri-containerd-90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed.scope - libcontainer container 90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed. Nov 12 20:48:25.979464 containerd[1464]: time="2024-11-12T20:48:25.979347793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:48:25.980160 containerd[1464]: time="2024-11-12T20:48:25.979503268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:48:25.980160 containerd[1464]: time="2024-11-12T20:48:25.979621027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:25.980160 containerd[1464]: time="2024-11-12T20:48:25.979809766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:48:26.031832 systemd[1]: Started cri-containerd-7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217.scope - libcontainer container 7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217. Nov 12 20:48:26.190521 containerd[1464]: time="2024-11-12T20:48:26.190234570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-656f8cbb56-jj5lc,Uid:a33ec532-6859-4a0f-a2fd-edeb7f28ebcb,Namespace:calico-system,Attempt:1,} returns sandbox id \"90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed\"" Nov 12 20:48:26.199899 containerd[1464]: time="2024-11-12T20:48:26.199730846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65945754fc-qf8qk,Uid:21429838-1e78-4092-ae72-36d1f86e0ea6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217\"" Nov 12 20:48:26.481435 containerd[1464]: time="2024-11-12T20:48:26.481258077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:26.482992 containerd[1464]: time="2024-11-12T20:48:26.482916172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:48:26.484680 containerd[1464]: time="2024-11-12T20:48:26.484608663Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:26.488036 containerd[1464]: time="2024-11-12T20:48:26.487989075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:26.489532 containerd[1464]: time="2024-11-12T20:48:26.489319823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 3.594084069s" Nov 12 20:48:26.489532 containerd[1464]: time="2024-11-12T20:48:26.489404810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:48:26.492655 containerd[1464]: time="2024-11-12T20:48:26.491945068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:48:26.494199 containerd[1464]: time="2024-11-12T20:48:26.493527191Z" level=info msg="CreateContainer within sandbox \"28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:48:26.513238 containerd[1464]: time="2024-11-12T20:48:26.513053436Z" level=info msg="CreateContainer within sandbox \"28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a87ee604e1c0ed2dc70a3620e722c05a8f7588392c958773a1c92496b836710e\"" Nov 12 20:48:26.515823 containerd[1464]: time="2024-11-12T20:48:26.515782564Z" level=info msg="StartContainer for 
\"a87ee604e1c0ed2dc70a3620e722c05a8f7588392c958773a1c92496b836710e\"" Nov 12 20:48:26.573341 systemd[1]: run-containerd-runc-k8s.io-a87ee604e1c0ed2dc70a3620e722c05a8f7588392c958773a1c92496b836710e-runc.58KGj6.mount: Deactivated successfully. Nov 12 20:48:26.584818 systemd[1]: Started cri-containerd-a87ee604e1c0ed2dc70a3620e722c05a8f7588392c958773a1c92496b836710e.scope - libcontainer container a87ee604e1c0ed2dc70a3620e722c05a8f7588392c958773a1c92496b836710e. Nov 12 20:48:26.647625 containerd[1464]: time="2024-11-12T20:48:26.647534038Z" level=info msg="StartContainer for \"a87ee604e1c0ed2dc70a3620e722c05a8f7588392c958773a1c92496b836710e\" returns successfully" Nov 12 20:48:27.100023 systemd-networkd[1376]: cali6b0d05f3618: Gained IPv6LL Nov 12 20:48:27.227859 systemd-networkd[1376]: cali7f98563a987: Gained IPv6LL Nov 12 20:48:27.751814 containerd[1464]: time="2024-11-12T20:48:27.751756939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:27.753914 containerd[1464]: time="2024-11-12T20:48:27.753819741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:48:27.755639 containerd[1464]: time="2024-11-12T20:48:27.755511169Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:27.759193 containerd[1464]: time="2024-11-12T20:48:27.759118282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:27.760470 containerd[1464]: time="2024-11-12T20:48:27.760251332Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id 
\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.268256897s" Nov 12 20:48:27.760470 containerd[1464]: time="2024-11-12T20:48:27.760321340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:48:27.762296 containerd[1464]: time="2024-11-12T20:48:27.762119635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:48:27.765471 containerd[1464]: time="2024-11-12T20:48:27.765226028Z" level=info msg="CreateContainer within sandbox \"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:48:27.794633 containerd[1464]: time="2024-11-12T20:48:27.793906026Z" level=info msg="CreateContainer within sandbox \"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"35d5c58293b94d569c936d6a7d6a1281898a325e6c95101b2cd2861cc2bcb8ac\"" Nov 12 20:48:27.794439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount904913937.mount: Deactivated successfully. Nov 12 20:48:27.796912 containerd[1464]: time="2024-11-12T20:48:27.795880157Z" level=info msg="StartContainer for \"35d5c58293b94d569c936d6a7d6a1281898a325e6c95101b2cd2861cc2bcb8ac\"" Nov 12 20:48:27.850640 systemd[1]: run-containerd-runc-k8s.io-35d5c58293b94d569c936d6a7d6a1281898a325e6c95101b2cd2861cc2bcb8ac-runc.6aspoO.mount: Deactivated successfully. 
Nov 12 20:48:27.857804 systemd[1]: Started cri-containerd-35d5c58293b94d569c936d6a7d6a1281898a325e6c95101b2cd2861cc2bcb8ac.scope - libcontainer container 35d5c58293b94d569c936d6a7d6a1281898a325e6c95101b2cd2861cc2bcb8ac. Nov 12 20:48:27.905963 containerd[1464]: time="2024-11-12T20:48:27.905906296Z" level=info msg="StartContainer for \"35d5c58293b94d569c936d6a7d6a1281898a325e6c95101b2cd2861cc2bcb8ac\" returns successfully" Nov 12 20:48:28.075678 kubelet[2547]: I1112 20:48:28.074963 2547 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:48:28.075678 kubelet[2547]: I1112 20:48:28.075047 2547 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:48:28.309025 kubelet[2547]: I1112 20:48:28.308984 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:48:28.326099 kubelet[2547]: I1112 20:48:28.325644 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65945754fc-677z5" podStartSLOduration=26.625394564 podStartE2EDuration="30.325617952s" podCreationTimestamp="2024-11-12 20:47:58 +0000 UTC" firstStartedPulling="2024-11-12 20:48:22.790457158 +0000 UTC m=+39.009849955" lastFinishedPulling="2024-11-12 20:48:26.490680546 +0000 UTC m=+42.710073343" observedRunningTime="2024-11-12 20:48:27.336822697 +0000 UTC m=+43.556215504" watchObservedRunningTime="2024-11-12 20:48:28.325617952 +0000 UTC m=+44.545010760" Nov 12 20:48:29.437976 ntpd[1433]: Listen normally on 7 vxlan.calico 192.168.125.128:123 Nov 12 20:48:29.438105 ntpd[1433]: Listen normally on 8 vxlan.calico [fe80::643a:66ff:fefa:e6e5%4]:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 ntpd[1433]: Listen normally on 7 vxlan.calico 192.168.125.128:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 
ntpd[1433]: Listen normally on 8 vxlan.calico [fe80::643a:66ff:fefa:e6e5%4]:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 ntpd[1433]: Listen normally on 9 cali0772a311ab2 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 ntpd[1433]: Listen normally on 10 cali3b88afa3676 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 ntpd[1433]: Listen normally on 11 calie0a9fedb919 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 ntpd[1433]: Listen normally on 12 cali43aa47464ce [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 ntpd[1433]: Listen normally on 13 cali6b0d05f3618 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 20:48:29.438725 ntpd[1433]: 12 Nov 20:48:29 ntpd[1433]: Listen normally on 14 cali7f98563a987 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 20:48:29.438191 ntpd[1433]: Listen normally on 9 cali0772a311ab2 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 20:48:29.438251 ntpd[1433]: Listen normally on 10 cali3b88afa3676 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 20:48:29.438319 ntpd[1433]: Listen normally on 11 calie0a9fedb919 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 20:48:29.438377 ntpd[1433]: Listen normally on 12 cali43aa47464ce [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 20:48:29.438429 ntpd[1433]: Listen normally on 13 cali6b0d05f3618 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 20:48:29.438480 ntpd[1433]: Listen normally on 14 cali7f98563a987 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 20:48:29.954787 containerd[1464]: time="2024-11-12T20:48:29.954718624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:29.956203 containerd[1464]: time="2024-11-12T20:48:29.956126523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:48:29.957719 
containerd[1464]: time="2024-11-12T20:48:29.957649108Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:29.961091 containerd[1464]: time="2024-11-12T20:48:29.961019279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:29.962114 containerd[1464]: time="2024-11-12T20:48:29.962042973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.199880206s" Nov 12 20:48:29.962114 containerd[1464]: time="2024-11-12T20:48:29.962107617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:48:29.966015 containerd[1464]: time="2024-11-12T20:48:29.964245473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:48:29.983540 containerd[1464]: time="2024-11-12T20:48:29.983457233Z" level=info msg="CreateContainer within sandbox \"90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:48:30.009204 containerd[1464]: time="2024-11-12T20:48:30.009144981Z" level=info msg="CreateContainer within sandbox \"90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"99ff6150ad8bef7040bae502855cece2a03236d3b757ff2949cd776a932fd2e9\"" Nov 12 20:48:30.010344 containerd[1464]: time="2024-11-12T20:48:30.010277094Z" level=info msg="StartContainer for \"99ff6150ad8bef7040bae502855cece2a03236d3b757ff2949cd776a932fd2e9\"" Nov 12 20:48:30.058834 systemd[1]: Started cri-containerd-99ff6150ad8bef7040bae502855cece2a03236d3b757ff2949cd776a932fd2e9.scope - libcontainer container 99ff6150ad8bef7040bae502855cece2a03236d3b757ff2949cd776a932fd2e9. Nov 12 20:48:30.127249 containerd[1464]: time="2024-11-12T20:48:30.127049022Z" level=info msg="StartContainer for \"99ff6150ad8bef7040bae502855cece2a03236d3b757ff2949cd776a932fd2e9\" returns successfully" Nov 12 20:48:30.232217 containerd[1464]: time="2024-11-12T20:48:30.231677192Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:48:30.235882 containerd[1464]: time="2024-11-12T20:48:30.234714500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:48:30.238267 containerd[1464]: time="2024-11-12T20:48:30.238207239Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 272.021522ms" Nov 12 20:48:30.238267 containerd[1464]: time="2024-11-12T20:48:30.238262131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:48:30.246427 containerd[1464]: time="2024-11-12T20:48:30.243738652Z" level=info msg="CreateContainer within sandbox \"7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217\" for 
container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:48:30.276418 containerd[1464]: time="2024-11-12T20:48:30.276315351Z" level=info msg="CreateContainer within sandbox \"7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d3592cec2cf28d8afe4c9d6fadd8f93f4a327bccb78a67426697ce7de3aa623\"" Nov 12 20:48:30.277714 containerd[1464]: time="2024-11-12T20:48:30.277677568Z" level=info msg="StartContainer for \"6d3592cec2cf28d8afe4c9d6fadd8f93f4a327bccb78a67426697ce7de3aa623\"" Nov 12 20:48:30.347885 kubelet[2547]: I1112 20:48:30.347165 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-656f8cbb56-jj5lc" podStartSLOduration=28.581111578 podStartE2EDuration="32.347141116s" podCreationTimestamp="2024-11-12 20:47:58 +0000 UTC" firstStartedPulling="2024-11-12 20:48:26.197255728 +0000 UTC m=+42.416648513" lastFinishedPulling="2024-11-12 20:48:29.963285267 +0000 UTC m=+46.182678051" observedRunningTime="2024-11-12 20:48:30.344066743 +0000 UTC m=+46.563459551" watchObservedRunningTime="2024-11-12 20:48:30.347141116 +0000 UTC m=+46.566533949" Nov 12 20:48:30.349741 kubelet[2547]: I1112 20:48:30.349430 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j4qsw" podStartSLOduration=25.984691775 podStartE2EDuration="32.349412767s" podCreationTimestamp="2024-11-12 20:47:58 +0000 UTC" firstStartedPulling="2024-11-12 20:48:21.397148771 +0000 UTC m=+37.616541566" lastFinishedPulling="2024-11-12 20:48:27.761869758 +0000 UTC m=+43.981262558" observedRunningTime="2024-11-12 20:48:28.326156818 +0000 UTC m=+44.545549627" watchObservedRunningTime="2024-11-12 20:48:30.349412767 +0000 UTC m=+46.568805576" Nov 12 20:48:30.382036 systemd[1]: Started cri-containerd-6d3592cec2cf28d8afe4c9d6fadd8f93f4a327bccb78a67426697ce7de3aa623.scope - libcontainer container 
6d3592cec2cf28d8afe4c9d6fadd8f93f4a327bccb78a67426697ce7de3aa623. Nov 12 20:48:30.478375 containerd[1464]: time="2024-11-12T20:48:30.478212078Z" level=info msg="StartContainer for \"6d3592cec2cf28d8afe4c9d6fadd8f93f4a327bccb78a67426697ce7de3aa623\" returns successfully" Nov 12 20:48:31.363470 kubelet[2547]: I1112 20:48:31.363237 2547 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65945754fc-qf8qk" podStartSLOduration=29.32760058 podStartE2EDuration="33.363157613s" podCreationTimestamp="2024-11-12 20:47:58 +0000 UTC" firstStartedPulling="2024-11-12 20:48:26.203637446 +0000 UTC m=+42.423030243" lastFinishedPulling="2024-11-12 20:48:30.239194475 +0000 UTC m=+46.458587276" observedRunningTime="2024-11-12 20:48:31.359037545 +0000 UTC m=+47.578430353" watchObservedRunningTime="2024-11-12 20:48:31.363157613 +0000 UTC m=+47.582550423" Nov 12 20:48:32.345015 kubelet[2547]: I1112 20:48:32.344957 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:48:39.943741 systemd[1]: Started sshd@7-10.128.0.68:22-139.178.89.65:41718.service - OpenSSH per-connection server daemon (139.178.89.65:41718). Nov 12 20:48:40.242694 sshd[4838]: Accepted publickey for core from 139.178.89.65 port 41718 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:48:40.245061 sshd[4838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:40.252637 systemd-logind[1454]: New session 8 of user core. Nov 12 20:48:40.260147 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:48:40.588199 sshd[4838]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:40.594426 systemd[1]: sshd@7-10.128.0.68:22-139.178.89.65:41718.service: Deactivated successfully. Nov 12 20:48:40.598096 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:48:40.599393 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit. 
Nov 12 20:48:40.601176 systemd-logind[1454]: Removed session 8.
Nov 12 20:48:43.276332 kubelet[2547]: I1112 20:48:43.276182 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:48:43.959496 containerd[1464]: time="2024-11-12T20:48:43.959012499Z" level=info msg="StopPodSandbox for \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\""
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.008 [WARNING][4909] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135ae93f-39f0-4df8-85fb-bb23f14dc7a4", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8", Pod:"csi-node-driver-j4qsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0772a311ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.009 [INFO][4909] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.009 [INFO][4909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" iface="eth0" netns=""
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.009 [INFO][4909] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.009 [INFO][4909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.040 [INFO][4915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0"
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.040 [INFO][4915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.040 [INFO][4915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.052 [WARNING][4915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0"
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.052 [INFO][4915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0"
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.054 [INFO][4915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:48:44.057228 containerd[1464]: 2024-11-12 20:48:44.056 [INFO][4909] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.058260 containerd[1464]: time="2024-11-12T20:48:44.057271005Z" level=info msg="TearDown network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\" successfully"
Nov 12 20:48:44.058260 containerd[1464]: time="2024-11-12T20:48:44.057305783Z" level=info msg="StopPodSandbox for \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\" returns successfully"
Nov 12 20:48:44.059040 containerd[1464]: time="2024-11-12T20:48:44.058987372Z" level=info msg="RemovePodSandbox for \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\""
Nov 12 20:48:44.059040 containerd[1464]: time="2024-11-12T20:48:44.059036889Z" level=info msg="Forcibly stopping sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\""
Nov 12 20:48:44.076829 kubelet[2547]: I1112 20:48:44.076382 2547 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.178 [WARNING][4933] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"135ae93f-39f0-4df8-85fb-bb23f14dc7a4", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"548d65b7bf", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"74bcc9d41d881d993e5cc2e521d100777a3a36ab0810cd7d42cf84e0063bb1f8", Pod:"csi-node-driver-j4qsw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0772a311ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.179 [INFO][4933] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.179 [INFO][4933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" iface="eth0" netns=""
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.179 [INFO][4933] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.179 [INFO][4933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.208 [INFO][4941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0"
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.209 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.209 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.217 [WARNING][4941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0"
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.217 [INFO][4941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" HandleID="k8s-pod-network.05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-csi--node--driver--j4qsw-eth0"
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.219 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:48:44.222327 containerd[1464]: 2024-11-12 20:48:44.220 [INFO][4933] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83"
Nov 12 20:48:44.222327 containerd[1464]: time="2024-11-12T20:48:44.222274866Z" level=info msg="TearDown network for sandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\" successfully"
Nov 12 20:48:44.228816 containerd[1464]: time="2024-11-12T20:48:44.228763772Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:48:44.229311 containerd[1464]: time="2024-11-12T20:48:44.228862263Z" level=info msg="RemovePodSandbox \"05176fa0d29f42800388a674ecbed50db34eaebc5122e754575ce3ec41830d83\" returns successfully"
Nov 12 20:48:44.230193 containerd[1464]: time="2024-11-12T20:48:44.229600361Z" level=info msg="StopPodSandbox for \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\""
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.280 [WARNING][4959] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0", GenerateName:"calico-kube-controllers-656f8cbb56-", Namespace:"calico-system", SelfLink:"", UID:"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"656f8cbb56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed", Pod:"calico-kube-controllers-656f8cbb56-jj5lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b0d05f3618", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.281 [INFO][4959] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.281 [INFO][4959] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" iface="eth0" netns=""
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.281 [INFO][4959] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.281 [INFO][4959] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.318 [INFO][4965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0"
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.318 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.318 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.333 [WARNING][4965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0"
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.333 [INFO][4965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0"
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.336 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:48:44.347156 containerd[1464]: 2024-11-12 20:48:44.345 [INFO][4959] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.349796 containerd[1464]: time="2024-11-12T20:48:44.348391612Z" level=info msg="TearDown network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\" successfully"
Nov 12 20:48:44.349796 containerd[1464]: time="2024-11-12T20:48:44.348431666Z" level=info msg="StopPodSandbox for \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\" returns successfully"
Nov 12 20:48:44.349796 containerd[1464]: time="2024-11-12T20:48:44.349078477Z" level=info msg="RemovePodSandbox for \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\""
Nov 12 20:48:44.349796 containerd[1464]: time="2024-11-12T20:48:44.349118943Z" level=info msg="Forcibly stopping sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\""
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.419 [WARNING][4984] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0", GenerateName:"calico-kube-controllers-656f8cbb56-", Namespace:"calico-system", SelfLink:"", UID:"a33ec532-6859-4a0f-a2fd-edeb7f28ebcb", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"656f8cbb56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"90dd20d928c7f3fcd932e5862ff1acefa11bc73c0cfefac0edf3d6f2a627eaed", Pod:"calico-kube-controllers-656f8cbb56-jj5lc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b0d05f3618", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.419 [INFO][4984] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.420 [INFO][4984] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" iface="eth0" netns=""
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.420 [INFO][4984] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.420 [INFO][4984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.456 [INFO][4990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0"
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.456 [INFO][4990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.456 [INFO][4990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.465 [WARNING][4990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0"
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.465 [INFO][4990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" HandleID="k8s-pod-network.15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--kube--controllers--656f8cbb56--jj5lc-eth0"
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.467 [INFO][4990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:48:44.469687 containerd[1464]: 2024-11-12 20:48:44.468 [INFO][4984] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca"
Nov 12 20:48:44.470689 containerd[1464]: time="2024-11-12T20:48:44.470641583Z" level=info msg="TearDown network for sandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\" successfully"
Nov 12 20:48:44.476128 containerd[1464]: time="2024-11-12T20:48:44.475939320Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:48:44.476128 containerd[1464]: time="2024-11-12T20:48:44.476031622Z" level=info msg="RemovePodSandbox \"15304e4dd675d5f6a0f1dbebbe03c2e1386dc6e8a28d1af3ca7767d55eb08dca\" returns successfully"
Nov 12 20:48:44.478418 containerd[1464]: time="2024-11-12T20:48:44.476623932Z" level=info msg="StopPodSandbox for \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\""
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.525 [WARNING][5008] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dcce075f-b91c-4537-baf7-afddd002397a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0", Pod:"coredns-6f6b679f8f-jn6hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0a9fedb919", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.526 [INFO][5008] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.526 [INFO][5008] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" iface="eth0" netns=""
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.526 [INFO][5008] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.526 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.553 [INFO][5015] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0"
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.553 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.553 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.564 [WARNING][5015] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0"
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.564 [INFO][5015] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0"
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.568 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:48:44.571315 containerd[1464]: 2024-11-12 20:48:44.569 [INFO][5008] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.573373 containerd[1464]: time="2024-11-12T20:48:44.571372030Z" level=info msg="TearDown network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\" successfully"
Nov 12 20:48:44.573373 containerd[1464]: time="2024-11-12T20:48:44.571407658Z" level=info msg="StopPodSandbox for \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\" returns successfully"
Nov 12 20:48:44.573373 containerd[1464]: time="2024-11-12T20:48:44.572207747Z" level=info msg="RemovePodSandbox for \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\""
Nov 12 20:48:44.573373 containerd[1464]: time="2024-11-12T20:48:44.572244530Z" level=info msg="Forcibly stopping sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\""
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.634 [WARNING][5033] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"dcce075f-b91c-4537-baf7-afddd002397a", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"0eb1676b22b53b124c445a9cfc01e125aa21855e7f4544654a77c0117d9eb2b0", Pod:"coredns-6f6b679f8f-jn6hw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0a9fedb919", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.636 [INFO][5033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.636 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" iface="eth0" netns=""
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.636 [INFO][5033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.636 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.665 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0"
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.665 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.665 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.677 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0"
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.677 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" HandleID="k8s-pod-network.07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--jn6hw-eth0"
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.684 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:48:44.689648 containerd[1464]: 2024-11-12 20:48:44.687 [INFO][5033] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d"
Nov 12 20:48:44.689648 containerd[1464]: time="2024-11-12T20:48:44.688954012Z" level=info msg="TearDown network for sandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\" successfully"
Nov 12 20:48:44.694247 containerd[1464]: time="2024-11-12T20:48:44.694186460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:48:44.694410 containerd[1464]: time="2024-11-12T20:48:44.694271866Z" level=info msg="RemovePodSandbox \"07f9a315af8dd59334f1b28790839afc8618cb781ee61f9f119f082442ad147d\" returns successfully" Nov 12 20:48:44.695476 containerd[1464]: time="2024-11-12T20:48:44.695119147Z" level=info msg="StopPodSandbox for \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\"" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.743 [WARNING][5059] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"21429838-1e78-4092-ae72-36d1f86e0ea6", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217", Pod:"calico-apiserver-65945754fc-qf8qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f98563a987", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.744 [INFO][5059] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.744 [INFO][5059] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" iface="eth0" netns="" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.744 [INFO][5059] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.744 [INFO][5059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.773 [INFO][5065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.773 [INFO][5065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.773 [INFO][5065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.782 [WARNING][5065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.782 [INFO][5065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.784 [INFO][5065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:44.787629 containerd[1464]: 2024-11-12 20:48:44.785 [INFO][5059] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.787629 containerd[1464]: time="2024-11-12T20:48:44.787508023Z" level=info msg="TearDown network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\" successfully" Nov 12 20:48:44.787629 containerd[1464]: time="2024-11-12T20:48:44.787599523Z" level=info msg="StopPodSandbox for \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\" returns successfully" Nov 12 20:48:44.788507 containerd[1464]: time="2024-11-12T20:48:44.788283134Z" level=info msg="RemovePodSandbox for \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\"" Nov 12 20:48:44.788507 containerd[1464]: time="2024-11-12T20:48:44.788326333Z" level=info msg="Forcibly stopping sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\"" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.839 [WARNING][5083] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"21429838-1e78-4092-ae72-36d1f86e0ea6", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"7974b06cab7b15daa2569d6c9212cf6caa26bfb05a488136d43be0ed111a7217", Pod:"calico-apiserver-65945754fc-qf8qk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7f98563a987", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.839 [INFO][5083] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.840 [INFO][5083] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" iface="eth0" netns="" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.840 [INFO][5083] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.840 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.877 [INFO][5090] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.877 [INFO][5090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.877 [INFO][5090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.887 [WARNING][5090] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.887 [INFO][5090] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" HandleID="k8s-pod-network.e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--qf8qk-eth0" Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.889 [INFO][5090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:44.894049 containerd[1464]: 2024-11-12 20:48:44.890 [INFO][5083] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54" Nov 12 20:48:44.894049 containerd[1464]: time="2024-11-12T20:48:44.891923796Z" level=info msg="TearDown network for sandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\" successfully" Nov 12 20:48:44.897705 containerd[1464]: time="2024-11-12T20:48:44.897645955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:48:44.897861 containerd[1464]: time="2024-11-12T20:48:44.897745930Z" level=info msg="RemovePodSandbox \"e419885c4cd398ae587bbadb674d951cb8666e56a04707e762ee2f4d27e73b54\" returns successfully" Nov 12 20:48:44.898659 containerd[1464]: time="2024-11-12T20:48:44.898341842Z" level=info msg="StopPodSandbox for \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\"" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.950 [WARNING][5108] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52481471-f033-4cb1-b92c-d14ac3414abb", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19", Pod:"calico-apiserver-65945754fc-677z5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b88afa3676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.950 [INFO][5108] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.950 [INFO][5108] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" iface="eth0" netns="" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.950 [INFO][5108] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.950 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.978 [INFO][5115] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.978 [INFO][5115] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.978 [INFO][5115] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.989 [WARNING][5115] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.989 [INFO][5115] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.991 [INFO][5115] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:44.994338 containerd[1464]: 2024-11-12 20:48:44.993 [INFO][5108] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:44.995889 containerd[1464]: time="2024-11-12T20:48:44.994489264Z" level=info msg="TearDown network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\" successfully" Nov 12 20:48:44.995889 containerd[1464]: time="2024-11-12T20:48:44.994531625Z" level=info msg="StopPodSandbox for \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\" returns successfully" Nov 12 20:48:44.995889 containerd[1464]: time="2024-11-12T20:48:44.995182182Z" level=info msg="RemovePodSandbox for \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\"" Nov 12 20:48:44.995889 containerd[1464]: time="2024-11-12T20:48:44.995220386Z" level=info msg="Forcibly stopping sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\"" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.047 [WARNING][5134] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0", GenerateName:"calico-apiserver-65945754fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"52481471-f033-4cb1-b92c-d14ac3414abb", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65945754fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"28c6f42ef274e865db3f793791b534183895da8191f6b1d0550d619cd39d1b19", Pod:"calico-apiserver-65945754fc-677z5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b88afa3676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.047 [INFO][5134] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.047 [INFO][5134] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" iface="eth0" netns="" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.047 [INFO][5134] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.047 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.080 [INFO][5140] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.080 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.080 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.089 [WARNING][5140] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.089 [INFO][5140] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" HandleID="k8s-pod-network.69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-calico--apiserver--65945754fc--677z5-eth0" Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.091 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:45.093635 containerd[1464]: 2024-11-12 20:48:45.092 [INFO][5134] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc" Nov 12 20:48:45.094458 containerd[1464]: time="2024-11-12T20:48:45.093686254Z" level=info msg="TearDown network for sandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\" successfully" Nov 12 20:48:45.098319 containerd[1464]: time="2024-11-12T20:48:45.098272367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:48:45.098487 containerd[1464]: time="2024-11-12T20:48:45.098359818Z" level=info msg="RemovePodSandbox \"69b17b4704df8d1395f15dceb289274990bc6f67ea9fde2e5be263e3fb50f2dc\" returns successfully" Nov 12 20:48:45.099087 containerd[1464]: time="2024-11-12T20:48:45.099042553Z" level=info msg="StopPodSandbox for \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\"" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.155 [WARNING][5158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f64531b6-17e8-4c48-9009-e59e3b3fc041", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397", Pod:"coredns-6f6b679f8f-fzbmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali43aa47464ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.155 [INFO][5158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.155 [INFO][5158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" iface="eth0" netns="" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.155 [INFO][5158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.155 [INFO][5158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.181 [INFO][5164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.182 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.182 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.194 [WARNING][5164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.194 [INFO][5164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.197 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:45.200370 containerd[1464]: 2024-11-12 20:48:45.198 [INFO][5158] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.200370 containerd[1464]: time="2024-11-12T20:48:45.199932171Z" level=info msg="TearDown network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\" successfully" Nov 12 20:48:45.200370 containerd[1464]: time="2024-11-12T20:48:45.199973424Z" level=info msg="StopPodSandbox for \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\" returns successfully" Nov 12 20:48:45.202105 containerd[1464]: time="2024-11-12T20:48:45.201666695Z" level=info msg="RemovePodSandbox for \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\"" Nov 12 20:48:45.202105 containerd[1464]: time="2024-11-12T20:48:45.201729084Z" level=info msg="Forcibly stopping sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\"" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.263 [WARNING][5183] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f64531b6-17e8-4c48-9009-e59e3b3fc041", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 47, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-2-0-e402c1fd471f6bbbf36b.c.flatcar-212911.internal", ContainerID:"d538e86c612f7530ef8e4cc16a5d5bc566ba4f0838c1513288fcd00011737397", Pod:"coredns-6f6b679f8f-fzbmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43aa47464ce", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.264 [INFO][5183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.264 [INFO][5183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" iface="eth0" netns="" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.264 [INFO][5183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.264 [INFO][5183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.290 [INFO][5190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.290 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.290 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.300 [WARNING][5190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.300 [INFO][5190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" HandleID="k8s-pod-network.440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Workload="ci--4081--2--0--e402c1fd471f6bbbf36b.c.flatcar--212911.internal-k8s-coredns--6f6b679f8f--fzbmz-eth0" Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.303 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:48:45.306252 containerd[1464]: 2024-11-12 20:48:45.304 [INFO][5183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f" Nov 12 20:48:45.307152 containerd[1464]: time="2024-11-12T20:48:45.306308051Z" level=info msg="TearDown network for sandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\" successfully" Nov 12 20:48:45.311798 containerd[1464]: time="2024-11-12T20:48:45.311496744Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 20:48:45.311798 containerd[1464]: time="2024-11-12T20:48:45.311627352Z" level=info msg="RemovePodSandbox \"440772a11916f21d118acb29e5382dfb1bfb80c54e26929d465adb6b0386d74f\" returns successfully" Nov 12 20:48:45.649076 systemd[1]: Started sshd@8-10.128.0.68:22-139.178.89.65:41728.service - OpenSSH per-connection server daemon (139.178.89.65:41728). 
Nov 12 20:48:45.955029 sshd[5197]: Accepted publickey for core from 139.178.89.65 port 41728 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:48:45.955868 sshd[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:45.962761 systemd-logind[1454]: New session 9 of user core. Nov 12 20:48:45.965779 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:48:46.249437 sshd[5197]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:46.255260 systemd[1]: sshd@8-10.128.0.68:22-139.178.89.65:41728.service: Deactivated successfully. Nov 12 20:48:46.258219 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:48:46.259351 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:48:46.261151 systemd-logind[1454]: Removed session 9. Nov 12 20:48:51.307125 systemd[1]: Started sshd@9-10.128.0.68:22-139.178.89.65:47740.service - OpenSSH per-connection server daemon (139.178.89.65:47740). Nov 12 20:48:51.598029 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 47740 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:48:51.599933 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:51.606738 systemd-logind[1454]: New session 10 of user core. Nov 12 20:48:51.612142 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:48:51.893271 sshd[5212]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:51.902935 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:48:51.903855 systemd[1]: sshd@9-10.128.0.68:22-139.178.89.65:47740.service: Deactivated successfully. Nov 12 20:48:51.907024 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:48:51.908418 systemd-logind[1454]: Removed session 10. 
Nov 12 20:48:51.953982 systemd[1]: Started sshd@10-10.128.0.68:22-139.178.89.65:47748.service - OpenSSH per-connection server daemon (139.178.89.65:47748). Nov 12 20:48:52.244133 sshd[5229]: Accepted publickey for core from 139.178.89.65 port 47748 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:48:52.245841 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:52.252887 systemd-logind[1454]: New session 11 of user core. Nov 12 20:48:52.261899 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:48:52.584137 sshd[5229]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:52.592103 systemd[1]: sshd@10-10.128.0.68:22-139.178.89.65:47748.service: Deactivated successfully. Nov 12 20:48:52.594761 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:48:52.595892 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Nov 12 20:48:52.597725 systemd-logind[1454]: Removed session 11. Nov 12 20:48:52.639382 systemd[1]: Started sshd@11-10.128.0.68:22-139.178.89.65:47764.service - OpenSSH per-connection server daemon (139.178.89.65:47764). Nov 12 20:48:52.937878 sshd[5239]: Accepted publickey for core from 139.178.89.65 port 47764 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:48:52.939834 sshd[5239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:52.945630 systemd-logind[1454]: New session 12 of user core. Nov 12 20:48:52.952763 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:48:53.225221 sshd[5239]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:53.231163 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:48:53.232237 systemd[1]: sshd@11-10.128.0.68:22-139.178.89.65:47764.service: Deactivated successfully. Nov 12 20:48:53.236211 systemd[1]: session-12.scope: Deactivated successfully. 
Nov 12 20:48:53.237631 systemd-logind[1454]: Removed session 12. Nov 12 20:48:58.281578 systemd[1]: Started sshd@12-10.128.0.68:22-139.178.89.65:36966.service - OpenSSH per-connection server daemon (139.178.89.65:36966). Nov 12 20:48:58.574122 sshd[5252]: Accepted publickey for core from 139.178.89.65 port 36966 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:48:58.574993 sshd[5252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:48:58.581680 systemd-logind[1454]: New session 13 of user core. Nov 12 20:48:58.588826 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:48:58.866403 sshd[5252]: pam_unix(sshd:session): session closed for user core Nov 12 20:48:58.875790 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Nov 12 20:48:58.877269 systemd[1]: sshd@12-10.128.0.68:22-139.178.89.65:36966.service: Deactivated successfully. Nov 12 20:48:58.883230 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 20:48:58.886743 systemd-logind[1454]: Removed session 13. Nov 12 20:49:03.926285 systemd[1]: Started sshd@13-10.128.0.68:22-139.178.89.65:36980.service - OpenSSH per-connection server daemon (139.178.89.65:36980). Nov 12 20:49:04.214628 sshd[5293]: Accepted publickey for core from 139.178.89.65 port 36980 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:04.216592 sshd[5293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:04.222449 systemd-logind[1454]: New session 14 of user core. Nov 12 20:49:04.228808 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 20:49:04.534614 sshd[5293]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:04.541014 systemd[1]: sshd@13-10.128.0.68:22-139.178.89.65:36980.service: Deactivated successfully. Nov 12 20:49:04.543958 systemd[1]: session-14.scope: Deactivated successfully. 
Nov 12 20:49:04.545332 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Nov 12 20:49:04.547374 systemd-logind[1454]: Removed session 14. Nov 12 20:49:09.595935 systemd[1]: Started sshd@14-10.128.0.68:22-139.178.89.65:37842.service - OpenSSH per-connection server daemon (139.178.89.65:37842). Nov 12 20:49:09.917701 sshd[5309]: Accepted publickey for core from 139.178.89.65 port 37842 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:09.922822 sshd[5309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:09.935425 systemd-logind[1454]: New session 15 of user core. Nov 12 20:49:09.942791 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 20:49:10.285244 sshd[5309]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:10.291310 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Nov 12 20:49:10.293028 systemd[1]: sshd@14-10.128.0.68:22-139.178.89.65:37842.service: Deactivated successfully. Nov 12 20:49:10.298392 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 20:49:10.302183 systemd-logind[1454]: Removed session 15. Nov 12 20:49:10.352714 systemd[1]: Started sshd@15-10.128.0.68:22-139.178.89.65:37854.service - OpenSSH per-connection server daemon (139.178.89.65:37854). Nov 12 20:49:10.668730 sshd[5322]: Accepted publickey for core from 139.178.89.65 port 37854 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:10.670987 sshd[5322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:10.681906 systemd-logind[1454]: New session 16 of user core. Nov 12 20:49:10.689787 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 12 20:49:11.116963 sshd[5322]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:11.131488 systemd[1]: sshd@15-10.128.0.68:22-139.178.89.65:37854.service: Deactivated successfully. 
Nov 12 20:49:11.137668 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 20:49:11.139812 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Nov 12 20:49:11.143819 systemd-logind[1454]: Removed session 16. Nov 12 20:49:11.181626 systemd[1]: Started sshd@16-10.128.0.68:22-139.178.89.65:37862.service - OpenSSH per-connection server daemon (139.178.89.65:37862). Nov 12 20:49:11.484508 sshd[5356]: Accepted publickey for core from 139.178.89.65 port 37862 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:11.486599 sshd[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:11.493959 systemd-logind[1454]: New session 17 of user core. Nov 12 20:49:11.497809 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 20:49:14.217741 sshd[5356]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:14.229087 systemd[1]: sshd@16-10.128.0.68:22-139.178.89.65:37862.service: Deactivated successfully. Nov 12 20:49:14.234992 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 20:49:14.242865 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Nov 12 20:49:14.245974 systemd-logind[1454]: Removed session 17. Nov 12 20:49:14.279034 systemd[1]: Started sshd@17-10.128.0.68:22-139.178.89.65:37878.service - OpenSSH per-connection server daemon (139.178.89.65:37878). Nov 12 20:49:14.610596 sshd[5372]: Accepted publickey for core from 139.178.89.65 port 37878 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:14.613701 sshd[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:14.623681 systemd-logind[1454]: New session 18 of user core. Nov 12 20:49:14.629825 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 12 20:49:15.159187 sshd[5372]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:15.167981 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Nov 12 20:49:15.169129 systemd[1]: sshd@17-10.128.0.68:22-139.178.89.65:37878.service: Deactivated successfully. Nov 12 20:49:15.174568 systemd[1]: session-18.scope: Deactivated successfully. Nov 12 20:49:15.177108 systemd-logind[1454]: Removed session 18. Nov 12 20:49:15.215678 systemd[1]: Started sshd@18-10.128.0.68:22-139.178.89.65:37884.service - OpenSSH per-connection server daemon (139.178.89.65:37884). Nov 12 20:49:15.505146 sshd[5385]: Accepted publickey for core from 139.178.89.65 port 37884 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:15.506806 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:15.515038 systemd-logind[1454]: New session 19 of user core. Nov 12 20:49:15.523795 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 12 20:49:15.810351 sshd[5385]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:15.817583 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Nov 12 20:49:15.817866 systemd[1]: sshd@18-10.128.0.68:22-139.178.89.65:37884.service: Deactivated successfully. Nov 12 20:49:15.821707 systemd[1]: session-19.scope: Deactivated successfully. Nov 12 20:49:15.826460 systemd-logind[1454]: Removed session 19. Nov 12 20:49:20.871354 systemd[1]: Started sshd@19-10.128.0.68:22-139.178.89.65:54988.service - OpenSSH per-connection server daemon (139.178.89.65:54988). Nov 12 20:49:21.173050 sshd[5402]: Accepted publickey for core from 139.178.89.65 port 54988 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:21.175110 sshd[5402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:21.182363 systemd-logind[1454]: New session 20 of user core. 
Nov 12 20:49:21.186901 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 12 20:49:21.462542 sshd[5402]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:21.467617 systemd[1]: sshd@19-10.128.0.68:22-139.178.89.65:54988.service: Deactivated successfully. Nov 12 20:49:21.470258 systemd[1]: session-20.scope: Deactivated successfully. Nov 12 20:49:21.472536 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Nov 12 20:49:21.474362 systemd-logind[1454]: Removed session 20. Nov 12 20:49:26.521077 systemd[1]: Started sshd@20-10.128.0.68:22-139.178.89.65:55002.service - OpenSSH per-connection server daemon (139.178.89.65:55002). Nov 12 20:49:26.818388 sshd[5417]: Accepted publickey for core from 139.178.89.65 port 55002 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:26.820541 sshd[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:26.826659 systemd-logind[1454]: New session 21 of user core. Nov 12 20:49:26.832828 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 12 20:49:27.110250 sshd[5417]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:27.115160 systemd[1]: sshd@20-10.128.0.68:22-139.178.89.65:55002.service: Deactivated successfully. Nov 12 20:49:27.118332 systemd[1]: session-21.scope: Deactivated successfully. Nov 12 20:49:27.120814 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Nov 12 20:49:27.122926 systemd-logind[1454]: Removed session 21. Nov 12 20:49:32.169115 systemd[1]: Started sshd@21-10.128.0.68:22-139.178.89.65:47652.service - OpenSSH per-connection server daemon (139.178.89.65:47652). 
Nov 12 20:49:32.459169 sshd[5449]: Accepted publickey for core from 139.178.89.65 port 47652 ssh2: RSA SHA256:rsyo+O1Pc5Bv08gVAS2T44MZMZEPvAn9tQ0zz7o0HYs Nov 12 20:49:32.461155 sshd[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:49:32.466722 systemd-logind[1454]: New session 22 of user core. Nov 12 20:49:32.471798 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 12 20:49:32.745056 sshd[5449]: pam_unix(sshd:session): session closed for user core Nov 12 20:49:32.751152 systemd[1]: sshd@21-10.128.0.68:22-139.178.89.65:47652.service: Deactivated successfully. Nov 12 20:49:32.753970 systemd[1]: session-22.scope: Deactivated successfully. Nov 12 20:49:32.755195 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Nov 12 20:49:32.756719 systemd-logind[1454]: Removed session 22.