Aug 5 22:43:53.102367 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 5 22:43:53.102415 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:43:53.102435 kernel: BIOS-provided physical RAM map: Aug 5 22:43:53.102449 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Aug 5 22:43:53.102461 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Aug 5 22:43:53.102475 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Aug 5 22:43:53.102493 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Aug 5 22:43:53.102512 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Aug 5 22:43:53.102526 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Aug 5 22:43:53.102541 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Aug 5 22:43:53.102555 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Aug 5 22:43:53.102570 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Aug 5 22:43:53.102584 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Aug 5 22:43:53.102599 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Aug 5 22:43:53.102621 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Aug 5 22:43:53.102637 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Aug 5 22:43:53.102653 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Aug 5 22:43:53.102669 kernel: NX (Execute Disable) protection: active Aug 5 22:43:53.102686 kernel: APIC: Static calls initialized Aug 5 22:43:53.102701 kernel: efi: EFI v2.7 by EDK II Aug 5 22:43:53.102717 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Aug 5 22:43:53.102733 kernel: SMBIOS 2.4 present. 
Aug 5 22:43:53.102749 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024 Aug 5 22:43:53.102765 kernel: Hypervisor detected: KVM Aug 5 22:43:53.102786 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 5 22:43:53.102811 kernel: kvm-clock: using sched offset of 11915991086 cycles Aug 5 22:43:53.102828 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 5 22:43:53.102844 kernel: tsc: Detected 2299.998 MHz processor Aug 5 22:43:53.102861 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:43:53.102878 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:43:53.102894 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Aug 5 22:43:53.102910 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Aug 5 22:43:53.102927 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:43:53.102947 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Aug 5 22:43:53.102964 kernel: Using GB pages for direct mapping Aug 5 22:43:53.102988 kernel: Secure boot disabled Aug 5 22:43:53.103002 kernel: ACPI: Early table checksum verification disabled Aug 5 22:43:53.103016 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Aug 5 22:43:53.103032 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Aug 5 22:43:53.103048 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Aug 5 22:43:53.103073 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Aug 5 22:43:53.103094 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Aug 5 22:43:53.103111 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Aug 5 22:43:53.103129 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Aug 5 22:43:53.103147 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Aug 5 22:43:53.103164 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Aug 5 22:43:53.103182 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Aug 5 22:43:53.103204 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Aug 5 22:43:53.103221 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Aug 5 22:43:53.103238 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Aug 5 22:43:53.103278 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Aug 5 22:43:53.103296 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Aug 5 22:43:53.103313 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Aug 5 22:43:53.103329 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Aug 5 22:43:53.103347 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Aug 5 22:43:53.103364 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Aug 5 22:43:53.103387 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Aug 5 22:43:53.103404 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 5 22:43:53.103422 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 5 22:43:53.103439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 5 22:43:53.103456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Aug 5 22:43:53.103473 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Aug 5 22:43:53.103490 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Aug 5 22:43:53.103508 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Aug 5 22:43:53.103525 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Aug 5 22:43:53.103548 kernel: Zone ranges: Aug 5 22:43:53.103564 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:43:53.103581 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 5 22:43:53.103598 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Aug 5 22:43:53.103615 kernel: Movable zone start for each node Aug 5 22:43:53.103632 kernel: Early memory node ranges Aug 5 22:43:53.103648 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Aug 5 22:43:53.103665 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Aug 5 22:43:53.103682 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Aug 5 22:43:53.103703 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Aug 5 22:43:53.103720 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Aug 5 22:43:53.103737 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Aug 5 22:43:53.103753 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:43:53.103780 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Aug 5 22:43:53.103798 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Aug 5 22:43:53.103816 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 5 22:43:53.103834 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Aug 5 22:43:53.103852 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 5 22:43:53.103870 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 5 22:43:53.103893 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:43:53.103911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 5 22:43:53.103929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:43:53.103947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 5 22:43:53.103964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 5 22:43:53.103990 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:43:53.104008 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:43:53.104026 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Aug 5 22:43:53.104048 kernel: Booting paravirtualized kernel on KVM Aug 5 22:43:53.104067 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:43:53.104085 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:43:53.104103 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:43:53.104122 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:43:53.104139 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:43:53.104166 kernel: kvm-guest: PV spinlocks enabled Aug 5 22:43:53.104183 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:43:53.104201 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 
flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:43:53.104222 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:43:53.104238 kernel: random: crng init done Aug 5 22:43:53.104267 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 5 22:43:53.104292 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:43:53.104306 kernel: Fallback order for Node 0: 0 Aug 5 22:43:53.104321 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Aug 5 22:43:53.104335 kernel: Policy zone: Normal Aug 5 22:43:53.104350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:43:53.104364 kernel: software IO TLB: area num 2. Aug 5 22:43:53.104507 kernel: Memory: 7509672K/7860584K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 350652K reserved, 0K cma-reserved) Aug 5 22:43:53.104525 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:43:53.104542 kernel: Kernel/User page tables isolation: enabled Aug 5 22:43:53.104558 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:43:53.104575 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:43:53.104590 kernel: Dynamic Preempt: voluntary Aug 5 22:43:53.104607 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:43:53.104626 kernel: rcu: RCU event tracing is enabled. Aug 5 22:43:53.104770 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:43:53.104790 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:43:53.104808 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:43:53.104831 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:43:53.104849 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 5 22:43:53.104867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:43:53.105004 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 5 22:43:53.105022 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 22:43:53.105039 kernel: Console: colour dummy device 80x25 Aug 5 22:43:53.105061 kernel: printk: console [ttyS0] enabled Aug 5 22:43:53.105079 kernel: ACPI: Core revision 20230628 Aug 5 22:43:53.105229 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:43:53.105247 kernel: x2apic enabled Aug 5 22:43:53.105297 kernel: APIC: Switched APIC routing to: physical x2apic Aug 5 22:43:53.105445 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Aug 5 22:43:53.105463 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 5 22:43:53.105481 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Aug 5 22:43:53.105506 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Aug 5 22:43:53.105524 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Aug 5 22:43:53.105647 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:43:53.105666 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 5 22:43:53.105684 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 5 22:43:53.105702 kernel: Spectre V2 : Mitigation: IBRS Aug 5 22:43:53.105720 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:43:53.105738 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:43:53.105756 kernel: RETBleed: Mitigation: IBRS Aug 5 22:43:53.105780 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 5 22:43:53.105798 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Aug 5 22:43:53.105816 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 5 22:43:53.105835 kernel: MDS: Mitigation: Clear CPU buffers Aug 5 22:43:53.105853 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:43:53.105871 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:43:53.105889 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:43:53.105908 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:43:53.105926 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:43:53.105949 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 5 22:43:53.105967 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:43:53.105994 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:43:53.106013 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:43:53.106031 kernel: SELinux: Initializing. Aug 5 22:43:53.106050 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.106069 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.106088 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Aug 5 22:43:53.106106 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:43:53.106129 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:43:53.106148 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:43:53.106166 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Aug 5 22:43:53.106185 kernel: signal: max sigframe size: 1776 Aug 5 22:43:53.106203 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:43:53.106223 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:43:53.106241 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 5 22:43:53.106275 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:43:53.106292 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:43:53.106313 kernel: .... node #0, CPUs: #1 Aug 5 22:43:53.106332 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 5 22:43:53.106352 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 5 22:43:53.106371 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:43:53.106389 kernel: smpboot: Max logical packages: 1 Aug 5 22:43:53.106415 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Aug 5 22:43:53.106434 kernel: devtmpfs: initialized Aug 5 22:43:53.106452 kernel: x86/mm: Memory block size: 128MB Aug 5 22:43:53.106476 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Aug 5 22:43:53.106495 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:43:53.106514 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:43:53.106533 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:43:53.106551 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:43:53.106568 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:43:53.106584 kernel: audit: type=2000 audit(1722897832.209:1): state=initialized audit_enabled=0 res=1 Aug 5 22:43:53.106601 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:43:53.106618 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:43:53.106640 kernel: cpuidle: using governor menu Aug 5 22:43:53.106659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:43:53.106675 kernel: dca service started, version 1.12.1 Aug 5 22:43:53.106694 kernel: PCI: Using configuration type 1 for base access Aug 5 22:43:53.106711 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 5 22:43:53.106730 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:43:53.106748 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:43:53.106767 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:43:53.106784 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:43:53.106807 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:43:53.106825 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:43:53.106864 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:43:53.106883 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:43:53.106900 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 5 22:43:53.106918 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:43:53.106934 kernel: ACPI: Interpreter enabled Aug 5 22:43:53.106952 kernel: ACPI: PM: (supports S0 S3 S5) Aug 5 22:43:53.106976 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:43:53.107000 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:43:53.107018 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 5 22:43:53.107036 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 5 22:43:53.107054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 5 22:43:53.107337 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 5 22:43:53.107545 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 5 22:43:53.107737 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 5 22:43:53.107768 kernel: PCI host bridge to bus 0000:00 Aug 5 22:43:53.107961 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 
window] Aug 5 22:43:53.108150 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 5 22:43:53.110398 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 5 22:43:53.110585 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Aug 5 22:43:53.110754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 5 22:43:53.110977 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 5 22:43:53.111191 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Aug 5 22:43:53.111449 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 5 22:43:53.111641 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 5 22:43:53.111836 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Aug 5 22:43:53.112032 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Aug 5 22:43:53.112220 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Aug 5 22:43:53.112456 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 5 22:43:53.112645 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Aug 5 22:43:53.112831 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Aug 5 22:43:53.113034 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Aug 5 22:43:53.113230 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Aug 5 22:43:53.113994 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Aug 5 22:43:53.114026 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 5 22:43:53.114175 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 5 22:43:53.114196 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 5 22:43:53.114217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 5 22:43:53.114234 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 5 22:43:53.114430 kernel: iommu: Default domain type: Translated Aug 5 22:43:53.114452 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:43:53.114471 kernel: efivars: Registered efivars operations Aug 5 22:43:53.114610 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:43:53.114631 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 5 22:43:53.114655 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Aug 5 22:43:53.114675 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Aug 5 22:43:53.114693 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Aug 5 22:43:53.114712 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Aug 5 22:43:53.114730 kernel: vgaarb: loaded Aug 5 22:43:53.114750 kernel: clocksource: Switched to clocksource kvm-clock Aug 5 22:43:53.114770 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:43:53.114789 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:43:53.114808 kernel: pnp: PnP ACPI init Aug 5 22:43:53.114832 kernel: pnp: PnP ACPI: found 7 devices Aug 5 22:43:53.114851 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:43:53.114870 kernel: NET: Registered PF_INET protocol family Aug 5 22:43:53.114889 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:43:53.114908 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 5 22:43:53.114928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:43:53.114947 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 
5 22:43:53.114966 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 5 22:43:53.114992 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 5 22:43:53.115015 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.115035 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.115054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:43:53.115073 kernel: NET: Registered PF_XDP protocol family Aug 5 22:43:53.115283 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 5 22:43:53.115460 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 5 22:43:53.115627 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 5 22:43:53.115793 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Aug 5 22:43:53.115996 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 5 22:43:53.116023 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:43:53.116043 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 5 22:43:53.116062 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Aug 5 22:43:53.116082 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 5 22:43:53.116100 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 5 22:43:53.116119 kernel: clocksource: Switched to clocksource tsc Aug 5 22:43:53.116137 kernel: Initialise system trusted keyrings Aug 5 22:43:53.116161 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 5 22:43:53.116180 kernel: Key type asymmetric registered Aug 5 22:43:53.116198 kernel: Asymmetric key parser 'x509' registered Aug 5 22:43:53.116216 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:43:53.116234 kernel: io scheduler mq-deadline registered Aug 5 22:43:53.116314 kernel: io scheduler kyber registered Aug 5 22:43:53.116333 kernel: io scheduler bfq registered Aug 5 22:43:53.116348 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:43:53.116367 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 5 22:43:53.116575 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Aug 5 22:43:53.116598 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Aug 5 22:43:53.116773 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Aug 5 22:43:53.116794 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 5 22:43:53.116987 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Aug 5 22:43:53.117010 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:43:53.117029 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:43:53.117047 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 5 22:43:53.117065 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Aug 5 22:43:53.117090 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Aug 5 22:43:53.118980 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Aug 5 22:43:53.119017 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 5 22:43:53.119038 kernel: i8042: Warning: Keylock active Aug 5 22:43:53.119058 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 5 22:43:53.119077 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 5 22:43:53.119311 kernel: 
rtc_cmos 00:00: RTC can wake from S4 Aug 5 22:43:53.119517 kernel: rtc_cmos 00:00: registered as rtc0 Aug 5 22:43:53.119698 kernel: rtc_cmos 00:00: setting system clock to 2024-08-05T22:43:52 UTC (1722897832) Aug 5 22:43:53.119865 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Aug 5 22:43:53.119888 kernel: intel_pstate: CPU model not supported Aug 5 22:43:53.119909 kernel: pstore: Using crash dump compression: deflate Aug 5 22:43:53.119929 kernel: pstore: Registered efi_pstore as persistent store backend Aug 5 22:43:53.119948 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:43:53.119976 kernel: Segment Routing with IPv6 Aug 5 22:43:53.119995 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:43:53.120017 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:43:53.120033 kernel: Key type dns_resolver registered Aug 5 22:43:53.120050 kernel: IPI shorthand broadcast: enabled Aug 5 22:43:53.120068 kernel: sched_clock: Marking stable (879016968, 160116642)->(1074369563, -35235953) Aug 5 22:43:53.120087 kernel: registered taskstats version 1 Aug 5 22:43:53.120107 kernel: Loading compiled-in X.509 certificates Aug 5 22:43:53.120127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 5 22:43:53.120146 kernel: Key type .fscrypt registered Aug 5 22:43:53.120164 kernel: Key type fscrypt-provisioning registered Aug 5 22:43:53.120185 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:43:53.120201 kernel: ima: No architecture policies found Aug 5 22:43:53.120219 kernel: clk: Disabling unused clocks Aug 5 22:43:53.120236 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 5 22:43:53.120496 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:43:53.120522 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:43:53.120541 kernel: Run /init as init process Aug 5 22:43:53.120560 kernel: with arguments: Aug 5 22:43:53.120579 kernel: /init Aug 5 22:43:53.120739 kernel: with environment: Aug 5 22:43:53.120759 kernel: HOME=/ Aug 5 22:43:53.120778 kernel: TERM=linux Aug 5 22:43:53.120796 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:43:53.120815 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 5 22:43:53.120966 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:43:53.121000 systemd[1]: Detected virtualization google. Aug 5 22:43:53.121025 systemd[1]: Detected architecture x86-64. Aug 5 22:43:53.121045 systemd[1]: Running in initrd. Aug 5 22:43:53.121186 systemd[1]: No hostname configured, using default hostname. Aug 5 22:43:53.121205 systemd[1]: Hostname set to . Aug 5 22:43:53.121225 systemd[1]: Initializing machine ID from random generator. Aug 5 22:43:53.121245 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:43:53.121463 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:43:53.121603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:43:53.121631 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 5 22:43:53.121652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:43:53.121673 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:43:53.121844 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:43:53.121870 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:43:53.121891 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:43:53.121912 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:43:53.121938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:43:53.121959 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:43:53.122006 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:43:53.122027 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:43:53.122045 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:43:53.122066 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:43:53.122091 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:43:53.122113 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:43:53.122134 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:43:53.122156 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:43:53.122177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:43:53.122199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:43:53.122220 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:43:53.122240 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:43:53.122274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:43:53.122298 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:43:53.122318 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:43:53.122336 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:43:53.122354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:43:53.122413 systemd-journald[183]: Collecting audit messages is disabled. Aug 5 22:43:53.122463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:43:53.122485 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:43:53.122505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:43:53.122527 systemd-journald[183]: Journal started Aug 5 22:43:53.122572 systemd-journald[183]: Runtime Journal (/run/log/journal/ff542b20f9be4efa9011f8d7e84ccbd6) is 8.0M, max 148.7M, 140.7M free. Aug 5 22:43:53.128150 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:43:53.127951 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:43:53.140324 systemd-modules-load[184]: Inserted module 'overlay' Aug 5 22:43:53.141629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:43:53.154491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Aug 5 22:43:53.163390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:43:53.169777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:43:53.183231 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:43:53.190481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:43:53.197282 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:43:53.208308 kernel: Bridge firewalling registered Aug 5 22:43:53.208977 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 5 22:43:53.211551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:43:53.212820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:43:53.223481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:43:53.228155 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:43:53.235533 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:43:53.239427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:43:53.253594 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:43:53.263503 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:43:53.277470 dracut-cmdline[215]: dracut-dracut-053 Aug 5 22:43:53.282298 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:43:53.323243 systemd-resolved[218]: Positive Trust Anchors: Aug 5 22:43:53.323791 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:43:53.323866 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:43:53.331291 systemd-resolved[218]: Defaulting to hostname 'linux'. Aug 5 22:43:53.334830 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:43:53.347036 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:43:53.393301 kernel: SCSI subsystem initialized Aug 5 22:43:53.406306 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:43:53.421312 kernel: iscsi: registered transport (tcp) Aug 5 22:43:53.449490 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:43:53.449579 kernel: QLogic iSCSI HBA Driver Aug 5 22:43:53.503237 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Aug 5 22:43:53.516501 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:43:53.561316 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:43:53.561414 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:43:53.561440 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:43:53.612321 kernel: raid6: avx2x4 gen() 18219 MB/s Aug 5 22:43:53.629336 kernel: raid6: avx2x2 gen() 18030 MB/s Aug 5 22:43:53.647049 kernel: raid6: avx2x1 gen() 13491 MB/s Aug 5 22:43:53.647151 kernel: raid6: using algorithm avx2x4 gen() 18219 MB/s Aug 5 22:43:53.665721 kernel: raid6: .... xor() 7512 MB/s, rmw enabled Aug 5 22:43:53.665807 kernel: raid6: using avx2x2 recovery algorithm Aug 5 22:43:53.698300 kernel: xor: automatically using best checksumming function avx Aug 5 22:43:53.899308 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:43:53.912537 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:43:53.922523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:43:53.953912 systemd-udevd[401]: Using default interface naming scheme 'v255'. Aug 5 22:43:53.961218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:43:53.973480 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:43:54.003198 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 5 22:43:54.042769 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:43:54.058569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:43:54.140250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:43:54.157514 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:43:54.193617 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:43:54.206492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:43:54.215418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:43:54.224813 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:43:54.239237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:43:54.254296 kernel: scsi host0: Virtio SCSI HBA Aug 5 22:43:54.261281 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 5 22:43:54.283353 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:43:54.297500 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:43:54.380983 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:43:54.381202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:43:54.385664 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:43:54.405949 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:43:54.405995 kernel: AES CTR mode by8 optimization enabled Aug 5 22:43:54.397520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:43:54.397787 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:43:54.399972 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 5 22:43:54.428640 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 5 22:43:54.446704 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 5 22:43:54.446989 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 5 22:43:54.447223 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 5 22:43:54.447471 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 5 22:43:54.447697 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 5 22:43:54.447725 kernel: GPT:17805311 != 25165823 Aug 5 22:43:54.447750 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 22:43:54.447785 kernel: GPT:17805311 != 25165823 Aug 5 22:43:54.447809 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 22:43:54.447833 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:43:54.447863 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 5 22:43:54.429456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:43:54.473125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:43:54.500853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:43:54.511327 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447) Aug 5 22:43:54.514277 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (459) Aug 5 22:43:54.556239 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Aug 5 22:43:54.557059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:43:54.570915 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Aug 5 22:43:54.577338 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Aug 5 22:43:54.581452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Aug 5 22:43:54.594532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 5 22:43:54.599526 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:43:54.641241 disk-uuid[551]: Primary Header is updated. Aug 5 22:43:54.641241 disk-uuid[551]: Secondary Entries is updated. Aug 5 22:43:54.641241 disk-uuid[551]: Secondary Header is updated. Aug 5 22:43:54.650491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:43:55.680285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:43:55.680375 disk-uuid[552]: The operation has completed successfully. Aug 5 22:43:55.757437 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:43:55.757590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:43:55.783541 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:43:55.818066 sh[569]: Success Aug 5 22:43:55.846373 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:43:55.940949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:43:55.948780 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:43:55.963020 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 5 22:43:56.022390 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 5 22:43:56.022487 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:56.022533 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:43:56.031924 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:43:56.038795 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:43:56.072474 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:43:56.073484 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:43:56.077494 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:43:56.117601 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:43:56.163114 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:56.163208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:56.163233 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:43:56.176298 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:43:56.193060 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:43:56.210533 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:56.223368 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:43:56.248594 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:43:56.309322 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:43:56.328597 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:43:56.440538 systemd-networkd[752]: lo: Link UP Aug 5 22:43:56.440555 systemd-networkd[752]: lo: Gained carrier Aug 5 22:43:56.448650 ignition[696]: Ignition 2.19.0 Aug 5 22:43:56.443107 systemd-networkd[752]: Enumeration completed Aug 5 22:43:56.448660 ignition[696]: Stage: fetch-offline Aug 5 22:43:56.443741 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:43:56.448729 ignition[696]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.443751 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:43:56.448746 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.446003 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:43:56.448882 ignition[696]: parsed url from cmdline: "" Aug 5 22:43:56.446416 systemd-networkd[752]: eth0: Link UP Aug 5 22:43:56.448887 ignition[696]: no config URL provided Aug 5 22:43:56.446425 systemd-networkd[752]: eth0: Gained carrier Aug 5 22:43:56.448897 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:43:56.446440 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:43:56.448907 ignition[696]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:43:56.449705 systemd[1]: Reached target network.target - Network. 
Aug 5 22:43:56.448917 ignition[696]: failed to fetch config: resource requires networking Aug 5 22:43:56.467425 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.28/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 5 22:43:56.449197 ignition[696]: Ignition finished successfully Aug 5 22:43:56.475745 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:43:56.532605 ignition[762]: Ignition 2.19.0 Aug 5 22:43:56.498572 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 5 22:43:56.532615 ignition[762]: Stage: fetch Aug 5 22:43:56.551764 unknown[762]: fetched base config from "system" Aug 5 22:43:56.532846 ignition[762]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.551778 unknown[762]: fetched base config from "system" Aug 5 22:43:56.532859 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.551789 unknown[762]: fetched user config from "gcp" Aug 5 22:43:56.532975 ignition[762]: parsed url from cmdline: "" Aug 5 22:43:56.554956 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:43:56.532982 ignition[762]: no config URL provided Aug 5 22:43:56.580526 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 22:43:56.532991 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:43:56.625317 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:43:56.533001 ignition[762]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:43:56.657546 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:43:56.533027 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Aug 5 22:43:56.700927 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:43:56.539157 ignition[762]: GET result: OK Aug 5 22:43:56.707855 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:43:56.539331 ignition[762]: parsing config with SHA512: e11e29f76bef411c514e2a63317d54ebb07912edd1df5d98e4a8513efb412f47739225b14666835ecdc6ed23097ebd8fc93a3b5ce9955d05858d8196014ba34b Aug 5 22:43:56.738476 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:43:56.552960 ignition[762]: fetch: fetch complete Aug 5 22:43:56.753460 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:43:56.552974 ignition[762]: fetch: fetch passed Aug 5 22:43:56.770480 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:43:56.553055 ignition[762]: Ignition finished successfully Aug 5 22:43:56.784483 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:43:56.622721 ignition[769]: Ignition 2.19.0 Aug 5 22:43:56.805565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Aug 5 22:43:56.622732 ignition[769]: Stage: kargs Aug 5 22:43:56.622967 ignition[769]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.622981 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.624043 ignition[769]: kargs: kargs passed Aug 5 22:43:56.624101 ignition[769]: Ignition finished successfully Aug 5 22:43:56.683030 ignition[776]: Ignition 2.19.0 Aug 5 22:43:56.683043 ignition[776]: Stage: disks Aug 5 22:43:56.683537 ignition[776]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.683559 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.685250 ignition[776]: disks: disks passed Aug 5 22:43:56.685354 ignition[776]: Ignition finished successfully Aug 5 22:43:56.856983 systemd-fsck[785]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 5 22:43:57.042437 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:43:57.076460 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:43:57.209303 kernel: EXT4-fs (sda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 5 22:43:57.210303 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:43:57.211289 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:43:57.245446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:43:57.260429 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:43:57.280042 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 22:43:57.341758 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (793) Aug 5 22:43:57.341804 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:57.341821 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:57.341836 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:43:57.341851 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:43:57.280139 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:43:57.280181 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:43:57.322151 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:43:57.352224 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:43:57.384539 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 22:43:57.528053 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:43:57.539454 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:43:57.549433 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:43:57.559416 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:43:57.710134 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:43:57.741490 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:43:57.770506 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:57.765676 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:43:57.788867 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 5 22:43:57.816109 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:43:57.826416 ignition[906]: INFO : Ignition 2.19.0 Aug 5 22:43:57.826416 ignition[906]: INFO : Stage: mount Aug 5 22:43:57.857466 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:57.857466 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:57.857466 ignition[906]: INFO : mount: mount passed Aug 5 22:43:57.857466 ignition[906]: INFO : Ignition finished successfully Aug 5 22:43:57.834971 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:43:57.849472 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:43:58.216593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:43:58.263291 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (918) Aug 5 22:43:58.281354 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:58.281441 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:58.281465 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:43:58.299322 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:43:58.302157 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:43:58.342010 ignition[935]: INFO : Ignition 2.19.0 Aug 5 22:43:58.342010 ignition[935]: INFO : Stage: files Aug 5 22:43:58.358428 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:58.358428 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:58.358428 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:43:58.358428 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:43:58.358428 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:43:58.352013 unknown[935]: wrote ssh authorized keys file for user: core Aug 5 22:43:58.495483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:43:58.421460 systemd-networkd[752]: eth0: Gained IPv6LL Aug 5 22:43:58.603010 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:43:58.620452 
ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Aug 5 22:43:58.882954 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:43:59.416762 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:59.416762 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:43:59.455469 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:43:59.455469 ignition[935]: INFO : files: files passed Aug 5 22:43:59.455469 ignition[935]: INFO : Ignition finished successfully Aug 5 22:43:59.421453 systemd[1]: Finished ignition-files.service - Ignition (files). 
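The file writes, the sysext symlink and the prepare-helm.service unit recorded by the files stage above are the kind of operations an Ignition (spec 3.x) config supplied as instance user-data would declare. Below is a minimal, hypothetical reconstruction built only from the paths and URLs visible in the log; the spec version string, SSH key and unit body are placeholders, not the actual config that was fetched.

```python
import json

# Hypothetical Ignition-style config mirroring the operations logged by the "files"
# stage above. Paths and download URLs are copied from the log; everything else is
# a placeholder. /home/core/install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml and
# /etc/flatcar/update.conf are omitted for brevity.
config = {
    "ignition": {"version": "3.3.0"},  # assumed spec version
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]},
        ],
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                "contents": {
                    "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"
                },
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            },
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\n# placeholder unit body\n"},
        ],
    },
}

print(json.dumps(config, indent=2))
```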
Aug 5 22:43:59.441658 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:43:59.487512 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:43:59.498028 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:43:59.680586 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:43:59.680586 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:43:59.498156 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:43:59.741487 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:43:59.544921 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:43:59.568767 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:43:59.599520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:43:59.684081 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:43:59.684218 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:43:59.696926 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:43:59.731599 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:43:59.751649 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:43:59.758672 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:43:59.811979 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:43:59.837638 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:43:59.859999 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:43:59.880768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:43:59.904842 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:43:59.915834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:43:59.916040 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:43:59.973760 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:43:59.982877 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:44:00.007831 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:44:00.018830 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:44:00.047755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:44:00.055817 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:44:00.091767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:44:00.101866 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:44:00.119948 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:44:00.140027 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:44:00.164698 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:44:00.164923 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Aug 5 22:44:00.191862 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:44:00.210751 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:44:00.231771 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:44:00.231974 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:44:00.241859 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:44:00.242078 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:44:00.298635 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:44:00.298914 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:44:00.308902 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:44:00.309090 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:44:00.346610 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:44:00.376489 ignition[988]: INFO : Ignition 2.19.0 Aug 5 22:44:00.376489 ignition[988]: INFO : Stage: umount Aug 5 22:44:00.376489 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:44:00.376489 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:44:00.376489 ignition[988]: INFO : umount: umount passed Aug 5 22:44:00.376489 ignition[988]: INFO : Ignition finished successfully Aug 5 22:44:00.384543 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:44:00.384843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:44:00.415746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:44:00.465040 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:44:00.465357 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:44:00.480942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:44:00.481132 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:44:00.541051 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:44:00.542282 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:44:00.542418 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:44:00.550235 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:44:00.550465 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:44:00.569094 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:44:00.569224 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:44:00.601986 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:44:00.602051 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:44:00.607752 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:44:00.607829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:44:00.624709 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 22:44:00.624802 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 22:44:00.652664 systemd[1]: Stopped target network.target - Network. Aug 5 22:44:00.669608 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:44:00.669714 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 5 22:44:00.689661 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:44:00.697646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:44:00.699385 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:44:00.712707 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:44:00.747609 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:44:00.756759 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:44:00.756820 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:44:00.773785 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:44:00.773852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:44:00.801603 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:44:00.801707 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:44:00.819738 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:44:00.819828 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:44:00.839620 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:44:00.839740 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:44:00.852948 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:44:00.858353 systemd-networkd[752]: eth0: DHCPv6 lease lost Aug 5 22:44:00.879711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:44:00.897972 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:44:00.898112 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:44:00.917109 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:44:00.917566 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:44:00.935421 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:44:00.935483 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:44:00.947553 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:44:00.976610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:44:00.976697 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:44:00.991763 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:44:00.991846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:44:01.021691 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:44:01.021778 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:44:01.041684 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:44:01.041765 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:44:01.071809 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:44:01.097077 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:44:01.097250 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:44:01.112940 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:44:01.113012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Aug 5 22:44:01.164653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:44:01.164721 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:44:01.184507 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:44:01.184725 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:44:01.211784 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:44:01.211882 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:44:01.272564 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:44:01.272689 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:44:01.321565 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:44:01.336674 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:44:01.336764 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:44:01.366732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:44:01.366823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:44:01.559518 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 5 22:44:01.390174 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:44:01.390324 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:44:01.409887 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:44:01.410008 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:44:01.431823 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:44:01.458791 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:44:01.505312 systemd[1]: Switching root. 
Aug 5 22:44:01.626485 systemd-journald[183]: Journal stopped Aug 5 22:43:53.102367 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 5 22:43:53.102415 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:43:53.102435 kernel: BIOS-provided physical RAM map: Aug 5 22:43:53.102449 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Aug 5 22:43:53.102461 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Aug 5 22:43:53.102475 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Aug 5 22:43:53.102493 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Aug 5 22:43:53.102512 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Aug 5 22:43:53.102526 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Aug 5 22:43:53.102541 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Aug 5 22:43:53.102555 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Aug 5 22:43:53.102570 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Aug 5 22:43:53.102584 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Aug 5 22:43:53.102599 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Aug 5 22:43:53.102621 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Aug 5 22:43:53.102637 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Aug 5 22:43:53.102653 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Aug 5 22:43:53.102669 kernel: NX (Execute Disable) protection: active Aug 5 22:43:53.102686 kernel: APIC: Static calls initialized Aug 5 22:43:53.102701 kernel: efi: EFI v2.7 by EDK II Aug 5 22:43:53.102717 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Aug 5 22:43:53.102733 kernel: SMBIOS 2.4 present. 
Aug 5 22:43:53.102749 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024 Aug 5 22:43:53.102765 kernel: Hypervisor detected: KVM Aug 5 22:43:53.102786 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 5 22:43:53.102811 kernel: kvm-clock: using sched offset of 11915991086 cycles Aug 5 22:43:53.102828 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 5 22:43:53.102844 kernel: tsc: Detected 2299.998 MHz processor Aug 5 22:43:53.102861 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:43:53.102878 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:43:53.102894 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Aug 5 22:43:53.102910 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Aug 5 22:43:53.102927 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:43:53.102947 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Aug 5 22:43:53.102964 kernel: Using GB pages for direct mapping Aug 5 22:43:53.102988 kernel: Secure boot disabled Aug 5 22:43:53.103002 kernel: ACPI: Early table checksum verification disabled Aug 5 22:43:53.103016 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Aug 5 22:43:53.103032 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Aug 5 22:43:53.103048 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Aug 5 22:43:53.103073 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Aug 5 22:43:53.103094 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Aug 5 22:43:53.103111 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Aug 5 22:43:53.103129 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Aug 5 22:43:53.103147 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Aug 5 22:43:53.103164 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Aug 5 22:43:53.103182 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Aug 5 22:43:53.103204 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Aug 5 22:43:53.103221 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Aug 5 22:43:53.103238 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Aug 5 22:43:53.103278 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Aug 5 22:43:53.103296 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Aug 5 22:43:53.103313 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Aug 5 22:43:53.103329 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Aug 5 22:43:53.103347 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Aug 5 22:43:53.103364 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Aug 5 22:43:53.103387 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Aug 5 22:43:53.103404 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 5 22:43:53.103422 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 5 22:43:53.103439 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 5 22:43:53.103456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Aug 5 22:43:53.103473 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Aug 5 22:43:53.103490 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Aug 5 22:43:53.103508 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Aug 5 22:43:53.103525 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Aug 5 22:43:53.103548 kernel: Zone ranges: Aug 5 22:43:53.103564 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:43:53.103581 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 5 22:43:53.103598 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Aug 5 22:43:53.103615 kernel: Movable zone start for each node Aug 5 22:43:53.103632 kernel: Early memory node ranges Aug 5 22:43:53.103648 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Aug 5 22:43:53.103665 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Aug 5 22:43:53.103682 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Aug 5 22:43:53.103703 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Aug 5 22:43:53.103720 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Aug 5 22:43:53.103737 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Aug 5 22:43:53.103753 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:43:53.103780 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Aug 5 22:43:53.103798 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Aug 5 22:43:53.103816 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Aug 5 22:43:53.103834 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Aug 5 22:43:53.103852 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 5 22:43:53.103870 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 5 22:43:53.103893 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:43:53.103911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 5 22:43:53.103929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:43:53.103947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 5 22:43:53.103964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 5 22:43:53.103990 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:43:53.104008 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:43:53.104026 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Aug 5 22:43:53.104048 kernel: Booting paravirtualized kernel on KVM Aug 5 22:43:53.104067 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:43:53.104085 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:43:53.104103 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:43:53.104122 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:43:53.104139 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:43:53.104166 kernel: kvm-guest: PV spinlocks enabled Aug 5 22:43:53.104183 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:43:53.104201 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 
flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:43:53.104222 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:43:53.104238 kernel: random: crng init done Aug 5 22:43:53.104267 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 5 22:43:53.104292 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:43:53.104306 kernel: Fallback order for Node 0: 0 Aug 5 22:43:53.104321 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Aug 5 22:43:53.104335 kernel: Policy zone: Normal Aug 5 22:43:53.104350 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:43:53.104364 kernel: software IO TLB: area num 2. Aug 5 22:43:53.104507 kernel: Memory: 7509672K/7860584K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 350652K reserved, 0K cma-reserved) Aug 5 22:43:53.104525 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:43:53.104542 kernel: Kernel/User page tables isolation: enabled Aug 5 22:43:53.104558 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:43:53.104575 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:43:53.104590 kernel: Dynamic Preempt: voluntary Aug 5 22:43:53.104607 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:43:53.104626 kernel: rcu: RCU event tracing is enabled. Aug 5 22:43:53.104770 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:43:53.104790 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:43:53.104808 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:43:53.104831 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:43:53.104849 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 5 22:43:53.104867 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:43:53.105004 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 5 22:43:53.105022 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 22:43:53.105039 kernel: Console: colour dummy device 80x25 Aug 5 22:43:53.105061 kernel: printk: console [ttyS0] enabled Aug 5 22:43:53.105079 kernel: ACPI: Core revision 20230628 Aug 5 22:43:53.105229 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:43:53.105247 kernel: x2apic enabled Aug 5 22:43:53.105297 kernel: APIC: Switched APIC routing to: physical x2apic Aug 5 22:43:53.105445 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Aug 5 22:43:53.105463 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 5 22:43:53.105481 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Aug 5 22:43:53.105506 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Aug 5 22:43:53.105524 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Aug 5 22:43:53.105647 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:43:53.105666 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 5 22:43:53.105684 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 5 22:43:53.105702 kernel: Spectre V2 : Mitigation: IBRS Aug 5 22:43:53.105720 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:43:53.105738 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:43:53.105756 kernel: RETBleed: Mitigation: IBRS Aug 5 22:43:53.105780 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 5 22:43:53.105798 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Aug 5 22:43:53.105816 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 5 22:43:53.105835 kernel: MDS: Mitigation: Clear CPU buffers Aug 5 22:43:53.105853 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:43:53.105871 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:43:53.105889 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:43:53.105908 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:43:53.105926 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:43:53.105949 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Aug 5 22:43:53.105967 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:43:53.105994 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:43:53.106013 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:43:53.106031 kernel: SELinux: Initializing. Aug 5 22:43:53.106050 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.106069 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.106088 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Aug 5 22:43:53.106106 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:43:53.106129 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:43:53.106148 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:43:53.106166 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Aug 5 22:43:53.106185 kernel: signal: max sigframe size: 1776 Aug 5 22:43:53.106203 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:43:53.106223 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:43:53.106241 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 5 22:43:53.106275 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:43:53.106292 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:43:53.106313 kernel: .... node #0, CPUs: #1 Aug 5 22:43:53.106332 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 5 22:43:53.106352 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 5 22:43:53.106371 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:43:53.106389 kernel: smpboot: Max logical packages: 1 Aug 5 22:43:53.106415 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Aug 5 22:43:53.106434 kernel: devtmpfs: initialized Aug 5 22:43:53.106452 kernel: x86/mm: Memory block size: 128MB Aug 5 22:43:53.106476 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Aug 5 22:43:53.106495 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:43:53.106514 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:43:53.106533 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:43:53.106551 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:43:53.106568 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:43:53.106584 kernel: audit: type=2000 audit(1722897832.209:1): state=initialized audit_enabled=0 res=1 Aug 5 22:43:53.106601 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:43:53.106618 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:43:53.106640 kernel: cpuidle: using governor menu Aug 5 22:43:53.106659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:43:53.106675 kernel: dca service started, version 1.12.1 Aug 5 22:43:53.106694 kernel: PCI: Using configuration type 1 for base access Aug 5 22:43:53.106711 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 5 22:43:53.106730 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:43:53.106748 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:43:53.106767 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:43:53.106784 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:43:53.106807 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:43:53.106825 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:43:53.106864 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:43:53.106883 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:43:53.106900 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 5 22:43:53.106918 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:43:53.106934 kernel: ACPI: Interpreter enabled Aug 5 22:43:53.106952 kernel: ACPI: PM: (supports S0 S3 S5) Aug 5 22:43:53.106976 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:43:53.107000 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:43:53.107018 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 5 22:43:53.107036 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 5 22:43:53.107054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 5 22:43:53.107337 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 5 22:43:53.107545 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 5 22:43:53.107737 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 5 22:43:53.107768 kernel: PCI host bridge to bus 0000:00 Aug 5 22:43:53.107961 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 
window] Aug 5 22:43:53.108150 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 5 22:43:53.110398 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 5 22:43:53.110585 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Aug 5 22:43:53.110754 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 5 22:43:53.110977 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 5 22:43:53.111191 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Aug 5 22:43:53.111449 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Aug 5 22:43:53.111641 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 5 22:43:53.111836 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Aug 5 22:43:53.112032 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Aug 5 22:43:53.112220 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Aug 5 22:43:53.112456 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Aug 5 22:43:53.112645 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Aug 5 22:43:53.112831 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Aug 5 22:43:53.113034 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Aug 5 22:43:53.113230 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Aug 5 22:43:53.113994 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Aug 5 22:43:53.114026 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 5 22:43:53.114175 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 5 22:43:53.114196 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 5 22:43:53.114217 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 5 22:43:53.114234 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 5 22:43:53.114430 kernel: iommu: Default domain type: Translated Aug 5 22:43:53.114452 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:43:53.114471 kernel: efivars: Registered efivars operations Aug 5 22:43:53.114610 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:43:53.114631 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 5 22:43:53.114655 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Aug 5 22:43:53.114675 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Aug 5 22:43:53.114693 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Aug 5 22:43:53.114712 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Aug 5 22:43:53.114730 kernel: vgaarb: loaded Aug 5 22:43:53.114750 kernel: clocksource: Switched to clocksource kvm-clock Aug 5 22:43:53.114770 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:43:53.114789 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:43:53.114808 kernel: pnp: PnP ACPI init Aug 5 22:43:53.114832 kernel: pnp: PnP ACPI: found 7 devices Aug 5 22:43:53.114851 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:43:53.114870 kernel: NET: Registered PF_INET protocol family Aug 5 22:43:53.114889 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:43:53.114908 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 5 22:43:53.114928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:43:53.114947 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 
5 22:43:53.114966 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 5 22:43:53.114992 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 5 22:43:53.115015 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.115035 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:43:53.115054 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:43:53.115073 kernel: NET: Registered PF_XDP protocol family Aug 5 22:43:53.115283 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 5 22:43:53.115460 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 5 22:43:53.115627 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 5 22:43:53.115793 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Aug 5 22:43:53.115996 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 5 22:43:53.116023 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:43:53.116043 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 5 22:43:53.116062 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Aug 5 22:43:53.116082 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 5 22:43:53.116100 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Aug 5 22:43:53.116119 kernel: clocksource: Switched to clocksource tsc Aug 5 22:43:53.116137 kernel: Initialise system trusted keyrings Aug 5 22:43:53.116161 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 5 22:43:53.116180 kernel: Key type asymmetric registered Aug 5 22:43:53.116198 kernel: Asymmetric key parser 'x509' registered Aug 5 22:43:53.116216 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:43:53.116234 kernel: io scheduler mq-deadline registered Aug 5 22:43:53.116314 kernel: io scheduler kyber registered Aug 5 22:43:53.116333 kernel: io scheduler bfq registered Aug 5 22:43:53.116348 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:43:53.116367 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Aug 5 22:43:53.116575 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Aug 5 22:43:53.116598 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Aug 5 22:43:53.116773 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Aug 5 22:43:53.116794 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Aug 5 22:43:53.116987 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Aug 5 22:43:53.117010 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:43:53.117029 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:43:53.117047 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 5 22:43:53.117065 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Aug 5 22:43:53.117090 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Aug 5 22:43:53.118980 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Aug 5 22:43:53.119017 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 5 22:43:53.119038 kernel: i8042: Warning: Keylock active Aug 5 22:43:53.119058 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 5 22:43:53.119077 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 5 22:43:53.119311 kernel: 
rtc_cmos 00:00: RTC can wake from S4 Aug 5 22:43:53.119517 kernel: rtc_cmos 00:00: registered as rtc0 Aug 5 22:43:53.119698 kernel: rtc_cmos 00:00: setting system clock to 2024-08-05T22:43:52 UTC (1722897832) Aug 5 22:43:53.119865 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Aug 5 22:43:53.119888 kernel: intel_pstate: CPU model not supported Aug 5 22:43:53.119909 kernel: pstore: Using crash dump compression: deflate Aug 5 22:43:53.119929 kernel: pstore: Registered efi_pstore as persistent store backend Aug 5 22:43:53.119948 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:43:53.119976 kernel: Segment Routing with IPv6 Aug 5 22:43:53.119995 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:43:53.120017 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:43:53.120033 kernel: Key type dns_resolver registered Aug 5 22:43:53.120050 kernel: IPI shorthand broadcast: enabled Aug 5 22:43:53.120068 kernel: sched_clock: Marking stable (879016968, 160116642)->(1074369563, -35235953) Aug 5 22:43:53.120087 kernel: registered taskstats version 1 Aug 5 22:43:53.120107 kernel: Loading compiled-in X.509 certificates Aug 5 22:43:53.120127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 5 22:43:53.120146 kernel: Key type .fscrypt registered Aug 5 22:43:53.120164 kernel: Key type fscrypt-provisioning registered Aug 5 22:43:53.120185 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:43:53.120201 kernel: ima: No architecture policies found Aug 5 22:43:53.120219 kernel: clk: Disabling unused clocks Aug 5 22:43:53.120236 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 5 22:43:53.120496 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:43:53.120522 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:43:53.120541 kernel: Run /init as init process Aug 5 22:43:53.120560 kernel: with arguments: Aug 5 22:43:53.120579 kernel: /init Aug 5 22:43:53.120739 kernel: with environment: Aug 5 22:43:53.120759 kernel: HOME=/ Aug 5 22:43:53.120778 kernel: TERM=linux Aug 5 22:43:53.120796 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:43:53.120815 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 5 22:43:53.120966 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:43:53.121000 systemd[1]: Detected virtualization google. Aug 5 22:43:53.121025 systemd[1]: Detected architecture x86-64. Aug 5 22:43:53.121045 systemd[1]: Running in initrd. Aug 5 22:43:53.121186 systemd[1]: No hostname configured, using default hostname. Aug 5 22:43:53.121205 systemd[1]: Hostname set to . Aug 5 22:43:53.121225 systemd[1]: Initializing machine ID from random generator. Aug 5 22:43:53.121245 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:43:53.121463 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:43:53.121603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:43:53.121631 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
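Two numeric claims in the kernel messages above can be cross-checked directly: the BogoMIPS figures ("4599.99 BogoMIPS (lpj=2299998)" per CPU, "9199.99 BogoMIPS" for the pair) follow from loops_per_jiffy, and the rtc_cmos line pairs a calendar time with its raw epoch value. A small sanity check, assuming the usual CONFIG_HZ=1000 for this kernel (the log does not print HZ):

```python
from datetime import datetime, timezone

# 1) BogoMIPS: the kernel prints lpj/(500000/HZ) using truncating integer math.
lpj = 2_299_998   # loops_per_jiffy from "Calibrating delay loop (skipped) ... (lpj=2299998)"
HZ = 1000         # assumed CONFIG_HZ; not shown in the log
cpus = 2

def bogomips(loops: int) -> str:
    # Mirror the kernel's "%lu.%02lu BogoMIPS" formatting.
    return f"{loops // (500_000 // HZ)}.{(loops // (5_000 // HZ)) % 100:02d}"

print(bogomips(lpj), "BogoMIPS per CPU")       # -> 4599.99, as logged
print(bogomips(cpus * lpj), "BogoMIPS total")  # -> 9199.99, as logged

# 2) rtc_cmos: "setting system clock to 2024-08-05T22:43:52 UTC (1722897832)"
print(datetime.fromtimestamp(1_722_897_832, tz=timezone.utc).isoformat())
# -> 2024-08-05T22:43:52+00:00
```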
Aug 5 22:43:53.121652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:43:53.121673 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:43:53.121844 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:43:53.121870 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:43:53.121891 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:43:53.121912 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:43:53.121938 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:43:53.121959 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:43:53.122006 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:43:53.122027 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:43:53.122045 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:43:53.122066 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:43:53.122091 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:43:53.122113 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:43:53.122134 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:43:53.122156 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:43:53.122177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:43:53.122199 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:43:53.122220 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:43:53.122240 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:43:53.122274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:43:53.122298 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:43:53.122318 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:43:53.122336 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:43:53.122354 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:43:53.122413 systemd-journald[183]: Collecting audit messages is disabled. Aug 5 22:43:53.122463 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:43:53.122485 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:43:53.122505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:43:53.122527 systemd-journald[183]: Journal started Aug 5 22:43:53.122572 systemd-journald[183]: Runtime Journal (/run/log/journal/ff542b20f9be4efa9011f8d7e84ccbd6) is 8.0M, max 148.7M, 140.7M free. Aug 5 22:43:53.128150 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:43:53.127951 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:43:53.140324 systemd-modules-load[184]: Inserted module 'overlay' Aug 5 22:43:53.141629 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:43:53.154491 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... 
Aug 5 22:43:53.163390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:43:53.169777 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:43:53.183231 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:43:53.190481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:43:53.197282 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:43:53.208308 kernel: Bridge firewalling registered Aug 5 22:43:53.208977 systemd-modules-load[184]: Inserted module 'br_netfilter' Aug 5 22:43:53.211551 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:43:53.212820 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:43:53.223481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:43:53.228155 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:43:53.235533 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:43:53.239427 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:43:53.253594 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:43:53.263503 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:43:53.277470 dracut-cmdline[215]: dracut-dracut-053 Aug 5 22:43:53.282298 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:43:53.323243 systemd-resolved[218]: Positive Trust Anchors: Aug 5 22:43:53.323791 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:43:53.323866 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:43:53.331291 systemd-resolved[218]: Defaulting to hostname 'linux'. Aug 5 22:43:53.334830 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:43:53.347036 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:43:53.393301 kernel: SCSI subsystem initialized Aug 5 22:43:53.406306 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:43:53.421312 kernel: iscsi: registered transport (tcp) Aug 5 22:43:53.449490 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:43:53.449579 kernel: QLogic iSCSI HBA Driver Aug 5 22:43:53.503237 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
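dracut and Ignition both key off parameters in the kernel command line echoed above (root=LABEL=ROOT, flatcar.oem.id=gce, verity.usrhash=..., and so on). A hedged sketch of splitting such a command line into key/value pairs, as one would when reading /proc/cmdline; only a subset of the logged parameters is shown:

```python
import shlex

# Split a kernel command line (e.g. the contents of /proc/cmdline) into key/value
# pairs. Bare flags map to None; repeated keys keep the last occurrence here, which
# is a simplification -- the kernel itself passes every occurrence along.
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in shlex.split(cmdline):
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

# Subset of the parameters visible in the dracut-cmdline line above.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr mount.usrflags=ro "
           "root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected "
           "flatcar.oem.id=gce")
params = parse_cmdline(cmdline)
print(params["flatcar.oem.id"])  # -> gce
print(params["root"])            # -> LABEL=ROOT
```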
Aug 5 22:43:53.516501 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:43:53.561316 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:43:53.561414 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:43:53.561440 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:43:53.612321 kernel: raid6: avx2x4 gen() 18219 MB/s Aug 5 22:43:53.629336 kernel: raid6: avx2x2 gen() 18030 MB/s Aug 5 22:43:53.647049 kernel: raid6: avx2x1 gen() 13491 MB/s Aug 5 22:43:53.647151 kernel: raid6: using algorithm avx2x4 gen() 18219 MB/s Aug 5 22:43:53.665721 kernel: raid6: .... xor() 7512 MB/s, rmw enabled Aug 5 22:43:53.665807 kernel: raid6: using avx2x2 recovery algorithm Aug 5 22:43:53.698300 kernel: xor: automatically using best checksumming function avx Aug 5 22:43:53.899308 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:43:53.912537 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:43:53.922523 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:43:53.953912 systemd-udevd[401]: Using default interface naming scheme 'v255'. Aug 5 22:43:53.961218 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:43:53.973480 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:43:54.003198 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 5 22:43:54.042769 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:43:54.058569 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:43:54.140250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:43:54.157514 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:43:54.193617 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:43:54.206492 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:43:54.215418 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:43:54.224813 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:43:54.239237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:43:54.254296 kernel: scsi host0: Virtio SCSI HBA Aug 5 22:43:54.261281 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Aug 5 22:43:54.283353 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:43:54.297500 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:43:54.380983 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:43:54.381202 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:43:54.385664 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:43:54.405949 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:43:54.405995 kernel: AES CTR mode by8 optimization enabled Aug 5 22:43:54.397520 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:43:54.397787 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:43:54.399972 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 5 22:43:54.428640 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Aug 5 22:43:54.446704 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Aug 5 22:43:54.446989 kernel: sd 0:0:1:0: [sda] Write Protect is off Aug 5 22:43:54.447223 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Aug 5 22:43:54.447471 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 5 22:43:54.447697 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 5 22:43:54.447725 kernel: GPT:17805311 != 25165823 Aug 5 22:43:54.447750 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 22:43:54.447785 kernel: GPT:17805311 != 25165823 Aug 5 22:43:54.447809 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 22:43:54.447833 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:43:54.447863 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Aug 5 22:43:54.429456 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:43:54.473125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:43:54.500853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:43:54.511327 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447) Aug 5 22:43:54.514277 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (459) Aug 5 22:43:54.556239 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Aug 5 22:43:54.557059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:43:54.570915 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Aug 5 22:43:54.577338 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Aug 5 22:43:54.581452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Aug 5 22:43:54.594532 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 5 22:43:54.599526 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:43:54.641241 disk-uuid[551]: Primary Header is updated. Aug 5 22:43:54.641241 disk-uuid[551]: Secondary Entries is updated. Aug 5 22:43:54.641241 disk-uuid[551]: Secondary Header is updated. Aug 5 22:43:54.650491 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:43:55.680285 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:43:55.680375 disk-uuid[552]: The operation has completed successfully. Aug 5 22:43:55.757437 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:43:55.757590 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:43:55.783541 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:43:55.818066 sh[569]: Success Aug 5 22:43:55.846373 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:43:55.940949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:43:55.948780 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:43:55.963020 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
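The GPT complaints above ("Primary header thinks Alt. header is not at the end of the disk", 17805311 != 25165823) are the typical signature of a disk image that was grown after being written: the backup GPT header still sits at the end of the original, smaller image, and disk-uuid.service then rewrites both headers ("Primary Header is updated ... Secondary Header is updated"). The sector arithmetic, using the 512-byte logical blocks reported for sda:

```python
SECTOR = 512  # logical block size reported for sda above

new_last_sector = 25_165_823  # last sector of the attached 12 GiB disk (25165824 blocks)
old_last_sector = 17_805_311  # where GPT actually found the backup header

def human(sectors: int) -> str:
    size = sectors * SECTOR
    return f"{size / 1e9:.1f} GB / {size / 2**30:.1f} GiB"

print("disk as attached:", human(new_last_sector + 1))  # 12.9 GB / 12.0 GiB, as logged
print("original image  :", human(old_last_sector + 1))  # ~9.1 GB / ~8.5 GiB (inferred)
```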
Aug 5 22:43:56.022390 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 5 22:43:56.022487 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:56.022533 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:43:56.031924 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:43:56.038795 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:43:56.072474 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:43:56.073484 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:43:56.077494 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:43:56.117601 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:43:56.163114 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:56.163208 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:56.163233 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:43:56.176298 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:43:56.193060 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:43:56.210533 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:56.223368 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:43:56.248594 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:43:56.309322 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:43:56.328597 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:43:56.440538 systemd-networkd[752]: lo: Link UP Aug 5 22:43:56.440555 systemd-networkd[752]: lo: Gained carrier Aug 5 22:43:56.448650 ignition[696]: Ignition 2.19.0 Aug 5 22:43:56.443107 systemd-networkd[752]: Enumeration completed Aug 5 22:43:56.448660 ignition[696]: Stage: fetch-offline Aug 5 22:43:56.443741 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:43:56.448729 ignition[696]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.443751 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:43:56.448746 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.446003 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:43:56.448882 ignition[696]: parsed url from cmdline: "" Aug 5 22:43:56.446416 systemd-networkd[752]: eth0: Link UP Aug 5 22:43:56.448887 ignition[696]: no config URL provided Aug 5 22:43:56.446425 systemd-networkd[752]: eth0: Gained carrier Aug 5 22:43:56.448897 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:43:56.446440 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:43:56.448907 ignition[696]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:43:56.449705 systemd[1]: Reached target network.target - Network. 
Aug 5 22:43:56.448917 ignition[696]: failed to fetch config: resource requires networking Aug 5 22:43:56.467425 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.28/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 5 22:43:56.449197 ignition[696]: Ignition finished successfully Aug 5 22:43:56.475745 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:43:56.532605 ignition[762]: Ignition 2.19.0 Aug 5 22:43:56.498572 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 5 22:43:56.532615 ignition[762]: Stage: fetch Aug 5 22:43:56.551764 unknown[762]: fetched base config from "system" Aug 5 22:43:56.532846 ignition[762]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.551778 unknown[762]: fetched base config from "system" Aug 5 22:43:56.532859 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.551789 unknown[762]: fetched user config from "gcp" Aug 5 22:43:56.532975 ignition[762]: parsed url from cmdline: "" Aug 5 22:43:56.554956 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:43:56.532982 ignition[762]: no config URL provided Aug 5 22:43:56.580526 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 22:43:56.532991 ignition[762]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:43:56.625317 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:43:56.533001 ignition[762]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:43:56.657546 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:43:56.533027 ignition[762]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Aug 5 22:43:56.700927 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:43:56.539157 ignition[762]: GET result: OK Aug 5 22:43:56.707855 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:43:56.539331 ignition[762]: parsing config with SHA512: e11e29f76bef411c514e2a63317d54ebb07912edd1df5d98e4a8513efb412f47739225b14666835ecdc6ed23097ebd8fc93a3b5ce9955d05858d8196014ba34b Aug 5 22:43:56.738476 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:43:56.552960 ignition[762]: fetch: fetch complete Aug 5 22:43:56.753460 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:43:56.552974 ignition[762]: fetch: fetch passed Aug 5 22:43:56.770480 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:43:56.553055 ignition[762]: Ignition finished successfully Aug 5 22:43:56.784483 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:43:56.622721 ignition[769]: Ignition 2.19.0 Aug 5 22:43:56.805565 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
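Editor's note: the fetch-offline pass above fails with "resource requires networking", and the fetch stage then pulls the instance's user-data from the GCE metadata server once eth0 has a lease. As a rough illustration only, and not Ignition's actual implementation, the same request looks like the sketch below; the Metadata-Flavor header is required by the metadata server, and the call only resolves from inside a GCE instance.

```python
# Illustrative only: roughly the request Ignition's "fetch" stage issues on GCE.
# Works only on a GCE VM; 169.254.169.254 is the link-local metadata server.
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=10) as resp:
    user_data = resp.read()        # the Ignition config, if one was attached to the instance
print(len(user_data), "bytes of user-data")
```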
Aug 5 22:43:56.622732 ignition[769]: Stage: kargs Aug 5 22:43:56.622967 ignition[769]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.622981 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.624043 ignition[769]: kargs: kargs passed Aug 5 22:43:56.624101 ignition[769]: Ignition finished successfully Aug 5 22:43:56.683030 ignition[776]: Ignition 2.19.0 Aug 5 22:43:56.683043 ignition[776]: Stage: disks Aug 5 22:43:56.683537 ignition[776]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:56.683559 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:56.685250 ignition[776]: disks: disks passed Aug 5 22:43:56.685354 ignition[776]: Ignition finished successfully Aug 5 22:43:56.856983 systemd-fsck[785]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 5 22:43:57.042437 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:43:57.076460 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:43:57.209303 kernel: EXT4-fs (sda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 5 22:43:57.210303 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:43:57.211289 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:43:57.245446 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:43:57.260429 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:43:57.280042 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 22:43:57.341758 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (793) Aug 5 22:43:57.341804 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:57.341821 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:57.341836 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:43:57.341851 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:43:57.280139 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:43:57.280181 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:43:57.322151 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:43:57.352224 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:43:57.384539 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 22:43:57.528053 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:43:57.539454 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:43:57.549433 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:43:57.559416 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:43:57.710134 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:43:57.741490 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:43:57.770506 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:57.765676 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:43:57.788867 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
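Editor's note: the fsck summary above ("ROOT: clean, 14/1628000 files, 120691/1617920 blocks") is a used/total count for inodes and blocks. A throwaway calculation, purely to unpack those numbers:

```python
# Unpack the systemd-fsck summary "ROOT: clean, 14/1628000 files, 120691/1617920 blocks".
files_used, files_total = 14, 1_628_000
blocks_used, blocks_total = 120_691, 1_617_920

print(f"inodes in use: {files_used / files_total:.4%}")    # ~0.0009%
print(f"blocks in use: {blocks_used / blocks_total:.2%}")  # ~7.46%
```

A nearly empty ROOT is expected on Flatcar, since the operating system lives on the separate, verity-protected /usr partition rather than on the root filesystem.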
Aug 5 22:43:57.816109 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:43:57.826416 ignition[906]: INFO : Ignition 2.19.0 Aug 5 22:43:57.826416 ignition[906]: INFO : Stage: mount Aug 5 22:43:57.857466 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:57.857466 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:57.857466 ignition[906]: INFO : mount: mount passed Aug 5 22:43:57.857466 ignition[906]: INFO : Ignition finished successfully Aug 5 22:43:57.834971 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:43:57.849472 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:43:58.216593 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:43:58.263291 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (918) Aug 5 22:43:58.281354 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:43:58.281441 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:43:58.281465 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:43:58.299322 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:43:58.302157 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:43:58.342010 ignition[935]: INFO : Ignition 2.19.0 Aug 5 22:43:58.342010 ignition[935]: INFO : Stage: files Aug 5 22:43:58.358428 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:43:58.358428 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:43:58.358428 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:43:58.358428 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:43:58.358428 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:43:58.358428 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:43:58.352013 unknown[935]: wrote ssh authorized keys file for user: core Aug 5 22:43:58.495483 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:43:58.421460 systemd-networkd[752]: eth0: Gained IPv6LL Aug 5 22:43:58.603010 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:43:58.620452 
ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:58.620452 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Aug 5 22:43:58.882954 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:43:59.416762 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:43:59.416762 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:43:59.455469 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:43:59.455469 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:43:59.455469 ignition[935]: INFO : files: files passed Aug 5 22:43:59.455469 ignition[935]: INFO : Ignition finished successfully Aug 5 22:43:59.421453 systemd[1]: Finished ignition-files.service - Ignition (files). 
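Editor's note: the files stage above downloads remote payloads (the helm tarball, the kubernetes sysext image) and writes them under /sysroot. When a file entry in an Ignition config carries a verification hash of the form sha512-&lt;hex&gt;, the downloaded bytes are checked against it before the file counts as written. The sketch below shows that check in isolation; it is not Ignition's code, the expected digest is a placeholder rather than a value from this log, and the path is borrowed from the log only for illustration.

```python
# Minimal sketch of an Ignition-style "verification.hash" check (type-<hexdigest>).
# EXPECTED and the path are placeholders for illustration.
import hashlib

EXPECTED = "sha512-" + "0" * 128          # placeholder, not a real digest

def verify(path: str, expected: str) -> bool:
    algo, _, want = expected.partition("-")
    with open(path, "rb") as f:
        got = hashlib.new(algo, f.read()).hexdigest()
    return got == want

print(verify("/opt/helm-v3.13.2-linux-amd64.tar.gz", EXPECTED))
```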
Aug 5 22:43:59.441658 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:43:59.487512 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:43:59.498028 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:43:59.680586 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:43:59.680586 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:43:59.498156 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:43:59.741487 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:43:59.544921 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:43:59.568767 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:43:59.599520 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:43:59.684081 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:43:59.684218 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:43:59.696926 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:43:59.731599 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:43:59.751649 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:43:59.758672 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:43:59.811979 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:43:59.837638 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:43:59.859999 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:43:59.880768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:43:59.904842 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:43:59.915834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:43:59.916040 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:43:59.973760 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:43:59.982877 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:44:00.007831 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:44:00.018830 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:44:00.047755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:44:00.055817 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:44:00.091767 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:44:00.101866 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:44:00.119948 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:44:00.140027 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:44:00.164698 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:44:00.164923 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Aug 5 22:44:00.191862 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:44:00.210751 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:44:00.231771 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:44:00.231974 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:44:00.241859 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:44:00.242078 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:44:00.298635 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:44:00.298914 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:44:00.308902 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:44:00.309090 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:44:00.346610 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:44:00.376489 ignition[988]: INFO : Ignition 2.19.0 Aug 5 22:44:00.376489 ignition[988]: INFO : Stage: umount Aug 5 22:44:00.376489 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:44:00.376489 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Aug 5 22:44:00.376489 ignition[988]: INFO : umount: umount passed Aug 5 22:44:00.376489 ignition[988]: INFO : Ignition finished successfully Aug 5 22:44:00.384543 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:44:00.384843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:44:00.415746 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:44:00.465040 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:44:00.465357 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:44:00.480942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:44:00.481132 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:44:00.541051 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:44:00.542282 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:44:00.542418 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:44:00.550235 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:44:00.550465 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:44:00.569094 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:44:00.569224 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:44:00.601986 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:44:00.602051 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:44:00.607752 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:44:00.607829 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:44:00.624709 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 22:44:00.624802 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 22:44:00.652664 systemd[1]: Stopped target network.target - Network. Aug 5 22:44:00.669608 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:44:00.669714 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Aug 5 22:44:00.689661 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:44:00.697646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:44:00.699385 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:44:00.712707 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:44:00.747609 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:44:00.756759 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:44:00.756820 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:44:00.773785 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:44:00.773852 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:44:00.801603 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:44:00.801707 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:44:00.819738 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:44:00.819828 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:44:00.839620 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:44:00.839740 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:44:00.852948 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:44:00.858353 systemd-networkd[752]: eth0: DHCPv6 lease lost Aug 5 22:44:00.879711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:44:00.897972 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:44:00.898112 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:44:00.917109 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:44:00.917566 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:44:00.935421 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:44:00.935483 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:44:00.947553 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:44:00.976610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:44:00.976697 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:44:00.991763 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:44:00.991846 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:44:01.021691 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:44:01.021778 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:44:01.041684 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:44:01.041765 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:44:01.071809 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:44:01.097077 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:44:01.097250 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:44:01.112940 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:44:01.113012 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Aug 5 22:44:01.164653 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:44:01.164721 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:44:01.184507 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:44:01.184725 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:44:01.211784 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:44:01.211882 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:44:01.272564 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:44:01.272689 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:44:01.321565 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:44:01.336674 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:44:01.336764 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:44:01.366732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:44:01.366823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:44:01.559518 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Aug 5 22:44:01.390174 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:44:01.390324 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:44:01.409887 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:44:01.410008 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:44:01.431823 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:44:01.458791 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:44:01.505312 systemd[1]: Switching root. Aug 5 22:44:01.626485 systemd-journald[183]: Journal stopped Aug 5 22:44:04.265738 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 22:44:04.265802 kernel: SELinux: policy capability open_perms=1 Aug 5 22:44:04.265825 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 22:44:04.265844 kernel: SELinux: policy capability always_check_network=0 Aug 5 22:44:04.265861 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 22:44:04.265878 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 22:44:04.265900 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 22:44:04.265923 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 22:44:04.265942 kernel: audit: type=1403 audit(1722897842.153:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 22:44:04.265982 systemd[1]: Successfully loaded SELinux policy in 89.847ms. Aug 5 22:44:04.266006 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.425ms. Aug 5 22:44:04.266030 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:44:04.266050 systemd[1]: Detected virtualization google. Aug 5 22:44:04.266071 systemd[1]: Detected architecture x86-64. Aug 5 22:44:04.266100 systemd[1]: Detected first boot. Aug 5 22:44:04.266125 systemd[1]: Initializing machine ID from random generator. 
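Editor's note: "Initializing machine ID from random generator" (first boot only) amounts to drawing 128 random bits and rendering them as 32 lowercase hex characters; this boot's ID reappears below as the journal directory name under /run/log/journal/. A toy equivalent, not systemd's code:

```python
# Toy equivalent of "Initializing machine ID from random generator":
# 128 random bits rendered as 32 lowercase hex characters. systemd keeps the ID
# transient on first boot until systemd-machine-id-commit.service writes it to disk;
# this boot's value shows up as /run/log/journal/e4ed42b630764544b3f14e564c63cc3a.
import uuid

machine_id = uuid.uuid4().hex      # 32 hex characters
print(machine_id)
```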
Aug 5 22:44:04.266148 zram_generator::config[1029]: No configuration found. Aug 5 22:44:04.266175 systemd[1]: Populated /etc with preset unit settings. Aug 5 22:44:04.266196 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 22:44:04.266231 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 22:44:04.266284 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 22:44:04.266309 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 22:44:04.266330 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 22:44:04.266352 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 22:44:04.266376 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 22:44:04.266399 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 22:44:04.266427 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 22:44:04.266449 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 22:44:04.266468 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 22:44:04.266487 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:44:04.266507 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:44:04.266525 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 22:44:04.266545 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 22:44:04.266566 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 22:44:04.266593 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:44:04.266614 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 5 22:44:04.266634 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:44:04.266653 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 22:44:04.266674 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 22:44:04.266695 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 22:44:04.266722 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 22:44:04.266742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:44:04.266763 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:44:04.266787 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:44:04.266809 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:44:04.266831 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 22:44:04.266854 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 22:44:04.266877 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:44:04.266899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:44:04.266921 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:44:04.266952 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Aug 5 22:44:04.266975 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 22:44:04.266999 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 22:44:04.267022 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 22:44:04.267045 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:44:04.267073 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 22:44:04.267096 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 22:44:04.267119 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 22:44:04.267144 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 22:44:04.267167 systemd[1]: Reached target machines.target - Containers. Aug 5 22:44:04.267191 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 22:44:04.267226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:44:04.267249 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:44:04.269832 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 22:44:04.269867 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:44:04.269890 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:44:04.269912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:44:04.269934 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 22:44:04.269956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:44:04.269980 kernel: fuse: init (API version 7.39) Aug 5 22:44:04.270003 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 22:44:04.270029 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 22:44:04.270052 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 22:44:04.270073 kernel: loop: module loaded Aug 5 22:44:04.270094 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 22:44:04.270117 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 22:44:04.270139 kernel: ACPI: bus type drm_connector registered Aug 5 22:44:04.270159 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:44:04.270181 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:44:04.270272 systemd-journald[1115]: Collecting audit messages is disabled. Aug 5 22:44:04.270338 systemd-journald[1115]: Journal started Aug 5 22:44:04.270381 systemd-journald[1115]: Runtime Journal (/run/log/journal/e4ed42b630764544b3f14e564c63cc3a) is 8.0M, max 148.7M, 140.7M free. Aug 5 22:44:04.272394 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 22:44:03.055209 systemd[1]: Queued start job for default target multi-user.target. Aug 5 22:44:03.082178 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 5 22:44:03.082759 systemd[1]: systemd-journald.service: Deactivated successfully. 
Aug 5 22:44:04.309314 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 22:44:04.324317 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:44:04.353366 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 22:44:04.353498 systemd[1]: Stopped verity-setup.service. Aug 5 22:44:04.379287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:44:04.389318 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:44:04.399859 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 22:44:04.410758 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 22:44:04.420708 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 22:44:04.431722 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 22:44:04.443746 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 22:44:04.453670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 22:44:04.463852 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 22:44:04.475855 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:44:04.487839 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 22:44:04.488084 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 22:44:04.499823 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:44:04.500056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:44:04.511828 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:44:04.512068 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:44:04.522869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:44:04.523110 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:44:04.535839 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 22:44:04.536080 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 22:44:04.546866 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:44:04.547163 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:44:04.557882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:44:04.567820 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 22:44:04.579862 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 22:44:04.591829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:44:04.619118 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 22:44:04.635444 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 22:44:04.660396 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 22:44:04.670520 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 22:44:04.670787 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:44:04.682210 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). 
Aug 5 22:44:04.704602 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 22:44:04.725539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 22:44:04.736601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:44:04.745616 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 22:44:04.764590 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 22:44:04.773799 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:44:04.780958 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 22:44:04.790824 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:44:04.804579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:44:04.821213 systemd-journald[1115]: Time spent on flushing to /var/log/journal/e4ed42b630764544b3f14e564c63cc3a is 76.262ms for 921 entries. Aug 5 22:44:04.821213 systemd-journald[1115]: System Journal (/var/log/journal/e4ed42b630764544b3f14e564c63cc3a) is 8.0M, max 584.8M, 576.8M free. Aug 5 22:44:04.951323 systemd-journald[1115]: Received client request to flush runtime journal. Aug 5 22:44:04.952531 kernel: loop0: detected capacity change from 0 to 80568 Aug 5 22:44:04.952616 kernel: block loop0: the capability attribute has been deprecated. Aug 5 22:44:04.821340 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 22:44:04.847564 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 22:44:04.863555 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 22:44:04.893499 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 22:44:04.904936 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 22:44:04.927108 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 22:44:04.939025 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 22:44:04.950890 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:44:04.971968 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 22:44:04.990551 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 22:44:05.019318 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 22:44:05.020530 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 22:44:05.032511 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 22:44:05.059582 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 22:44:05.061514 kernel: loop1: detected capacity change from 0 to 210664 Aug 5 22:44:05.060874 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 22:44:05.084904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:44:05.094697 udevadm[1149]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 5 22:44:05.158124 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Aug 5 22:44:05.159435 systemd-tmpfiles[1164]: ACLs are not supported, ignoring. Aug 5 22:44:05.174751 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:44:05.194311 kernel: loop2: detected capacity change from 0 to 89576 Aug 5 22:44:05.292493 kernel: loop3: detected capacity change from 0 to 139760 Aug 5 22:44:05.363450 kernel: loop4: detected capacity change from 0 to 80568 Aug 5 22:44:05.429361 kernel: loop5: detected capacity change from 0 to 210664 Aug 5 22:44:05.475599 kernel: loop6: detected capacity change from 0 to 89576 Aug 5 22:44:05.522872 kernel: loop7: detected capacity change from 0 to 139760 Aug 5 22:44:05.587064 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Aug 5 22:44:05.589937 (sd-merge)[1170]: Merged extensions into '/usr'. Aug 5 22:44:05.597470 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 22:44:05.597494 systemd[1]: Reloading... Aug 5 22:44:05.754318 zram_generator::config[1191]: No configuration found. Aug 5 22:44:06.011120 ldconfig[1141]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 22:44:06.035948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:44:06.139729 systemd[1]: Reloading finished in 541 ms. Aug 5 22:44:06.171433 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 22:44:06.182125 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 22:44:06.205674 systemd[1]: Starting ensure-sysext.service... Aug 5 22:44:06.222745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:44:06.242367 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)... Aug 5 22:44:06.242396 systemd[1]: Reloading... Aug 5 22:44:06.306789 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 22:44:06.308702 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 22:44:06.313368 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 22:44:06.314199 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Aug 5 22:44:06.314484 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Aug 5 22:44:06.324002 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:44:06.325321 systemd-tmpfiles[1235]: Skipping /boot Aug 5 22:44:06.350883 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:44:06.351110 systemd-tmpfiles[1235]: Skipping /boot Aug 5 22:44:06.400296 zram_generator::config[1265]: No configuration found. Aug 5 22:44:06.529657 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Aug 5 22:44:06.595722 systemd[1]: Reloading finished in 352 ms. Aug 5 22:44:06.617232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 22:44:06.637023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:44:06.661698 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:44:06.680898 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 22:44:06.700866 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 22:44:06.720891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:44:06.738801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:44:06.755771 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 22:44:06.772145 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:44:06.772766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:44:06.784506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:44:06.803690 augenrules[1323]: No rules Aug 5 22:44:06.802662 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:44:06.821702 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:44:06.831609 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:44:06.840079 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Aug 5 22:44:06.841426 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 22:44:06.851391 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:44:06.855116 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:44:06.867650 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:44:06.879238 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:44:06.879497 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:44:06.891787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:44:06.892640 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:44:06.905552 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:44:06.905869 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:44:06.915981 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:44:06.928078 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:44:06.940716 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:44:06.978897 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:44:07.023334 systemd[1]: Finished ensure-sysext.service. Aug 5 22:44:07.039970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 5 22:44:07.040860 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:44:07.063606 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1341) Aug 5 22:44:07.063571 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:44:07.085620 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:44:07.106524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:44:07.127554 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:44:07.141503 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 5 22:44:07.148580 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:44:07.161826 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:44:07.170454 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 22:44:07.184505 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:44:07.194421 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:44:07.194700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:44:07.195851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:44:07.197315 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:44:07.209552 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:44:07.210288 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:44:07.222426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:44:07.222666 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:44:07.233914 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:44:07.234517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:44:07.279607 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:44:07.281166 systemd-resolved[1315]: Positive Trust Anchors: Aug 5 22:44:07.282307 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:44:07.282388 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:44:07.289702 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1332) Aug 5 22:44:07.298082 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 5 22:44:07.304906 systemd-resolved[1315]: Defaulting to hostname 'linux'. 
Aug 5 22:44:07.313297 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 5 22:44:07.331313 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 5 22:44:07.334534 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Aug 5 22:44:07.344435 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:44:07.344551 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:44:07.344785 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:44:07.377152 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:44:07.445301 kernel: EDAC MC: Ver: 3.0.0 Aug 5 22:44:07.447519 systemd-networkd[1372]: lo: Link UP Aug 5 22:44:07.448218 systemd-networkd[1372]: lo: Gained carrier Aug 5 22:44:07.452284 systemd-networkd[1372]: Enumeration completed Aug 5 22:44:07.452451 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:44:07.454419 kernel: ACPI: button: Power Button [PWRF] Aug 5 22:44:07.456558 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:44:07.456573 systemd-networkd[1372]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:44:07.459459 systemd-networkd[1372]: eth0: Link UP Aug 5 22:44:07.460329 systemd-networkd[1372]: eth0: Gained carrier Aug 5 22:44:07.460514 systemd-networkd[1372]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:44:07.470095 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Aug 5 22:44:07.472994 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Aug 5 22:44:07.475369 systemd-networkd[1372]: eth0: DHCPv4 address 10.128.0.28/32, gateway 10.128.0.1 acquired from 169.254.169.254 Aug 5 22:44:07.484399 kernel: ACPI: button: Sleep Button [SLPF] Aug 5 22:44:07.497032 systemd[1]: Reached target network.target - Network. Aug 5 22:44:07.510567 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 22:44:07.524806 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Aug 5 22:44:07.533212 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Aug 5 22:44:07.548652 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 22:44:07.588173 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Aug 5 22:44:07.598798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:44:07.609288 kernel: mousedev: PS/2 mouse device common for all mice Aug 5 22:44:07.623684 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:44:07.636189 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:44:07.656290 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 22:44:07.677765 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Aug 5 22:44:07.711664 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:44:07.712191 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:44:07.715649 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:44:07.736643 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:44:07.753426 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:44:07.765778 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:44:07.775796 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 22:44:07.787536 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:44:07.798700 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:44:07.808684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:44:07.820497 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:44:07.831440 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:44:07.831509 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:44:07.840466 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:44:07.851388 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:44:07.863281 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:44:07.876131 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:44:07.887526 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:44:07.898797 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:44:07.909471 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:44:07.919493 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:44:07.928564 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:44:07.928623 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:44:07.934446 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:44:07.957528 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 22:44:07.979425 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:44:08.016098 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:44:08.024179 jq[1423]: false Aug 5 22:44:08.035529 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:44:08.045449 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:44:08.054553 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Aug 5 22:44:08.056686 coreos-metadata[1421]: Aug 05 22:44:08.056 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Aug 5 22:44:08.061018 coreos-metadata[1421]: Aug 05 22:44:08.059 INFO Fetch successful Aug 5 22:44:08.061018 coreos-metadata[1421]: Aug 05 22:44:08.059 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Aug 5 22:44:08.063107 coreos-metadata[1421]: Aug 05 22:44:08.061 INFO Fetch successful Aug 5 22:44:08.063310 coreos-metadata[1421]: Aug 05 22:44:08.063 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Aug 5 22:44:08.065557 coreos-metadata[1421]: Aug 05 22:44:08.065 INFO Fetch successful Aug 5 22:44:08.065557 coreos-metadata[1421]: Aug 05 22:44:08.065 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Aug 5 22:44:08.068046 coreos-metadata[1421]: Aug 05 22:44:08.066 INFO Fetch successful Aug 5 22:44:08.074489 systemd[1]: Started ntpd.service - Network Time Service. Aug 5 22:44:08.090420 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:44:08.109375 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:44:08.120475 extend-filesystems[1426]: Found loop4 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found loop5 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found loop6 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found loop7 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda1 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda2 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda3 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found usr Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda4 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda6 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda7 Aug 5 22:44:08.120475 extend-filesystems[1426]: Found sda9 Aug 5 22:44:08.120475 extend-filesystems[1426]: Checking size of /dev/sda9 Aug 5 22:44:08.300683 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Aug 5 22:44:08.300750 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Aug 5 22:44:08.300800 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1346) Aug 5 22:44:08.133576 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:44:08.145320 dbus-daemon[1422]: [system] SELinux support is enabled Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:33 UTC 2024 (1): Starting Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: ---------------------------------------------------- Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: corporation. 
Support and training for ntp-4 are Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: available at https://www.nwtime.org/support Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: ---------------------------------------------------- Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: proto: precision = 0.108 usec (-23) Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: basedate set to 2024-07-24 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: gps base set to 2024-07-28 (week 2325) Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Listen normally on 3 eth0 10.128.0.28:123 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Listen normally on 4 lo [::1]:123 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:1c%2#123 flags 0x11 failed: Cannot assign requested address Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1c%2#123 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:1c%2 Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:44:08.301681 ntpd[1428]: 5 Aug 22:44:08 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:44:08.304394 extend-filesystems[1426]: Resized partition /dev/sda9 Aug 5 22:44:08.200522 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 22:44:08.150547 dbus-daemon[1422]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1372 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 5 22:44:08.335849 extend-filesystems[1444]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 22:44:08.335849 extend-filesystems[1444]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 5 22:44:08.335849 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 2 Aug 5 22:44:08.335849 extend-filesystems[1444]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Aug 5 22:44:08.387806 update_engine[1451]: I0805 22:44:08.315141 1451 main.cc:92] Flatcar Update Engine starting Aug 5 22:44:08.387806 update_engine[1451]: I0805 22:44:08.320343 1451 update_check_scheduler.cc:74] Next update check in 8m21s Aug 5 22:44:08.255036 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Aug 5 22:44:08.153119 ntpd[1428]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:33 UTC 2024 (1): Starting Aug 5 22:44:08.389057 extend-filesystems[1426]: Resized filesystem in /dev/sda9 Aug 5 22:44:08.255641 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 22:44:08.153153 ntpd[1428]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 22:44:08.260509 systemd[1]: Starting update-engine.service - Update Engine... 
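For scale, the resize2fs output above reports the root filesystem growing from 1617920 to 2538491 blocks of 4 KiB during the extend-filesystems step. A quick back-of-the-envelope conversion, using only the values shown in the log:

```python
# Sizes implied by the resize2fs messages above (4 KiB = 4096-byte blocks).
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 2_538_491

old_bytes = old_blocks * BLOCK   # 6,627,000,320 bytes  (~6.2 GiB)
new_bytes = new_blocks * BLOCK   # 10,397,659,136 bytes (~9.7 GiB)

gib = 1024 ** 3
print(f"before: {old_bytes / gib:.2f} GiB, after: {new_bytes / gib:.2f} GiB")
```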
Aug 5 22:44:08.153177 ntpd[1428]: ---------------------------------------------------- Aug 5 22:44:08.410210 jq[1452]: true Aug 5 22:44:08.271415 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:44:08.153192 ntpd[1428]: ntp-4 is maintained by Network Time Foundation, Aug 5 22:44:08.313541 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:44:08.153207 ntpd[1428]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 5 22:44:08.343985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:44:08.153224 ntpd[1428]: corporation. Support and training for ntp-4 are Aug 5 22:44:08.345437 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:44:08.153239 ntpd[1428]: available at https://www.nwtime.org/support Aug 5 22:44:08.346075 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:44:08.153277 ntpd[1428]: ---------------------------------------------------- Aug 5 22:44:08.347527 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:44:08.167660 ntpd[1428]: proto: precision = 0.108 usec (-23) Aug 5 22:44:08.366187 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:44:08.168968 ntpd[1428]: basedate set to 2024-07-24 Aug 5 22:44:08.366511 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:44:08.168997 ntpd[1428]: gps base set to 2024-07-28 (week 2325) Aug 5 22:44:08.404862 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:44:08.171584 ntpd[1428]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 22:44:08.405839 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 22:44:08.171651 ntpd[1428]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 22:44:08.171910 ntpd[1428]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 22:44:08.171961 ntpd[1428]: Listen normally on 3 eth0 10.128.0.28:123 Aug 5 22:44:08.172024 ntpd[1428]: Listen normally on 4 lo [::1]:123 Aug 5 22:44:08.172080 ntpd[1428]: bind(21) AF_INET6 fe80::4001:aff:fe80:1c%2#123 flags 0x11 failed: Cannot assign requested address Aug 5 22:44:08.172106 ntpd[1428]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:1c%2#123 Aug 5 22:44:08.172128 ntpd[1428]: failed to init interface for address fe80::4001:aff:fe80:1c%2 Aug 5 22:44:08.172177 ntpd[1428]: Listening on routing socket on fd #21 for interface updates Aug 5 22:44:08.173699 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:44:08.173732 ntpd[1428]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:44:08.452650 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Aug 5 22:44:08.453386 systemd-logind[1449]: Watching system buttons on /dev/input/event2 (Sleep Button) Aug 5 22:44:08.460026 jq[1459]: true Aug 5 22:44:08.453435 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 5 22:44:08.454489 systemd-logind[1449]: New seat seat0. Aug 5 22:44:08.456928 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:44:08.476663 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 22:44:08.487077 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
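The coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254. A minimal sketch of the same request pattern, using the URL paths shown in the log (the `Metadata-Flavor: Google` header is the standard requirement for this endpoint; this only works from inside a GCE instance):

```python
# Minimal sketch of the metadata requests coreos-metadata performs above.
import urllib.request

BASE = "http://169.254.169.254/computeMetadata/v1"

def fetch(path: str) -> str:
    req = urllib.request.Request(f"{BASE}/{path}",
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

# Paths taken from the log above.
for path in ("instance/hostname",
             "instance/network-interfaces/0/ip",
             "instance/machine-type"):
    print(path, "=>", fetch(path))
```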
Aug 5 22:44:08.534837 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 5 22:44:08.566635 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:44:08.584283 tar[1458]: linux-amd64/helm Aug 5 22:44:08.596416 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:44:08.608717 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 22:44:08.609039 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:44:08.609703 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:44:08.632669 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 5 22:44:08.641502 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:44:08.641785 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 22:44:08.663700 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 22:44:08.680444 bash[1490]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:44:08.686363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:44:08.710997 systemd[1]: Starting sshkeys.service... Aug 5 22:44:08.728455 systemd-networkd[1372]: eth0: Gained IPv6LL Aug 5 22:44:08.743195 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 22:44:08.756208 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:44:08.775350 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:44:08.777409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:44:08.796577 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 22:44:08.814730 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Aug 5 22:44:08.867126 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 5 22:44:08.884799 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 5 22:44:08.908840 init.sh[1502]: + '[' -e /etc/default/instance_configs.cfg.template ']' Aug 5 22:44:08.909307 init.sh[1502]: + echo -e '[InstanceSetup]\nset_host_keys = false' Aug 5 22:44:08.909307 init.sh[1502]: + /usr/bin/google_instance_setup Aug 5 22:44:08.916368 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 5 22:44:08.919482 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 5 22:44:08.921826 dbus-daemon[1422]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1491 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 5 22:44:08.944560 systemd[1]: Starting polkit.service - Authorization Manager... Aug 5 22:44:08.990872 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:44:09.009711 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Aug 5 22:44:09.026484 systemd[1]: Started sshd@0-10.128.0.28:22-139.178.68.195:40228.service - OpenSSH per-connection server daemon (139.178.68.195:40228). Aug 5 22:44:09.109483 polkitd[1513]: Started polkitd version 121 Aug 5 22:44:09.126757 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:44:09.142998 coreos-metadata[1507]: Aug 05 22:44:09.142 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Aug 5 22:44:09.151200 coreos-metadata[1507]: Aug 05 22:44:09.151 INFO Fetch failed with 404: resource not found Aug 5 22:44:09.151200 coreos-metadata[1507]: Aug 05 22:44:09.151 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Aug 5 22:44:09.157580 coreos-metadata[1507]: Aug 05 22:44:09.156 INFO Fetch successful Aug 5 22:44:09.157580 coreos-metadata[1507]: Aug 05 22:44:09.157 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Aug 5 22:44:09.158115 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:44:09.159382 coreos-metadata[1507]: Aug 05 22:44:09.159 INFO Fetch failed with 404: resource not found Aug 5 22:44:09.159382 coreos-metadata[1507]: Aug 05 22:44:09.159 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Aug 5 22:44:09.158905 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:44:09.164208 coreos-metadata[1507]: Aug 05 22:44:09.161 INFO Fetch failed with 404: resource not found Aug 5 22:44:09.164208 coreos-metadata[1507]: Aug 05 22:44:09.163 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Aug 5 22:44:09.164208 coreos-metadata[1507]: Aug 05 22:44:09.164 INFO Fetch successful Aug 5 22:44:09.171185 polkitd[1513]: Loading rules from directory /etc/polkit-1/rules.d Aug 5 22:44:09.171322 polkitd[1513]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 5 22:44:09.182848 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:44:09.183218 unknown[1507]: wrote ssh authorized keys file for user: core Aug 5 22:44:09.187646 polkitd[1513]: Finished loading, compiling and executing 2 rules Aug 5 22:44:09.195191 dbus-daemon[1422]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 5 22:44:09.195453 systemd[1]: Started polkit.service - Authorization Manager. Aug 5 22:44:09.198797 polkitd[1513]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 5 22:44:09.296642 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:44:09.297902 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 5 22:44:09.299001 systemd-hostnamed[1491]: Hostname set to (transient) Aug 5 22:44:09.300971 systemd-resolved[1315]: System hostname changed to 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal'. Aug 5 22:44:09.311607 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:44:09.314247 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:44:09.323712 systemd[1]: Finished sshkeys.service. Aug 5 22:44:09.352862 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:44:09.369767 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 22:44:09.380730 systemd[1]: Reached target getty.target - Login Prompts. 
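The sshkeys step above pulls `.../attributes/ssh-keys` from the metadata server and then rewrites /home/core/.ssh/authorized_keys. A rough sketch of that transformation, assuming the usual GCE `username:key` one-entry-per-line format (the exact filtering coreos-metadata applies is not shown in the log, and the sample keys below are placeholders):

```python
# Hedged sketch: turn GCE "ssh-keys" metadata into authorized_keys content for one user.
# GCE stores entries as "username:ssh-ed25519 AAAA... comment", one per line.
def authorized_keys_for(metadata_value: str, user: str) -> str:
    keys = []
    for line in metadata_value.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        name, key = line.split(":", 1)
        if name == user:
            keys.append(key.strip())
    return "\n".join(keys) + "\n"

sample = ("core:ssh-ed25519 AAAAC3Nza... core@example\n"
          "other:ssh-rsa AAAAB3Nza... other@example")
print(authorized_keys_for(sample, "core"), end="")
```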
Aug 5 22:44:09.576634 containerd[1460]: time="2024-08-05T22:44:09.576187331Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 22:44:09.685798 containerd[1460]: time="2024-08-05T22:44:09.683922129Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 22:44:09.685798 containerd[1460]: time="2024-08-05T22:44:09.684029760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:44:09.696665 containerd[1460]: time="2024-08-05T22:44:09.696586013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:44:09.696665 containerd[1460]: time="2024-08-05T22:44:09.696658940Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:44:09.697086 containerd[1460]: time="2024-08-05T22:44:09.697036483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:44:09.697086 containerd[1460]: time="2024-08-05T22:44:09.697082652Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 22:44:09.697270 containerd[1460]: time="2024-08-05T22:44:09.697226913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:44:09.697405 containerd[1460]: time="2024-08-05T22:44:09.697367544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:44:09.697468 containerd[1460]: time="2024-08-05T22:44:09.697406408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:44:09.698325 containerd[1460]: time="2024-08-05T22:44:09.697526783Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:44:09.698325 containerd[1460]: time="2024-08-05T22:44:09.697894684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:44:09.698325 containerd[1460]: time="2024-08-05T22:44:09.697925385Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:44:09.698325 containerd[1460]: time="2024-08-05T22:44:09.697946483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:44:09.698325 containerd[1460]: time="2024-08-05T22:44:09.698134799Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:44:09.698325 containerd[1460]: time="2024-08-05T22:44:09.698161489Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 5 22:44:09.700527 containerd[1460]: time="2024-08-05T22:44:09.698243731Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:44:09.700527 containerd[1460]: time="2024-08-05T22:44:09.698747273Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717537183Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717613359Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717637898Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717692879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717718393Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717737138Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717758891Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.717982917Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.718013665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.718044817Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.718068274Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.718097739Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.718129681Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:44:09.719291 containerd[1460]: time="2024-08-05T22:44:09.718155554Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718178347Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718203632Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718227678Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718250835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718292816Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718466948Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718883576Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718927631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718951868Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.718988900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.719079214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.719105439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.719127682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.719953 containerd[1460]: time="2024-08-05T22:44:09.719148548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.720606 containerd[1460]: time="2024-08-05T22:44:09.719171218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.720606 containerd[1460]: time="2024-08-05T22:44:09.719195614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.720606 containerd[1460]: time="2024-08-05T22:44:09.719225243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.720731 sshd[1520]: Accepted publickey for core from 139.178.68.195 port 40228 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:09.721702 containerd[1460]: time="2024-08-05T22:44:09.719246770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.722690 containerd[1460]: time="2024-08-05T22:44:09.722327787Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:44:09.723033 containerd[1460]: time="2024-08-05T22:44:09.723000895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.723178 containerd[1460]: time="2024-08-05T22:44:09.723155349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.723290 containerd[1460]: time="2024-08-05T22:44:09.723269733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 5 22:44:09.723390 containerd[1460]: time="2024-08-05T22:44:09.723371883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.726875 containerd[1460]: time="2024-08-05T22:44:09.726833942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.727039 containerd[1460]: time="2024-08-05T22:44:09.727016290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.727174 containerd[1460]: time="2024-08-05T22:44:09.727149131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.727287 containerd[1460]: time="2024-08-05T22:44:09.727249529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 22:44:09.729197 sshd[1520]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:09.731075 containerd[1460]: time="2024-08-05T22:44:09.729818774Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:44:09.731075 containerd[1460]: time="2024-08-05T22:44:09.730322152Z" level=info msg="Connect containerd service" Aug 5 22:44:09.731075 containerd[1460]: time="2024-08-05T22:44:09.730398375Z" level=info msg="using legacy CRI server" Aug 5 22:44:09.731075 containerd[1460]: time="2024-08-05T22:44:09.730412047Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:44:09.731075 containerd[1460]: time="2024-08-05T22:44:09.730556782Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:44:09.732858 containerd[1460]: time="2024-08-05T22:44:09.732814657Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733433534Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733559350Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733492525Z" level=info msg="Start subscribing containerd event" Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733671518Z" level=info msg="Start recovering state" Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733776061Z" level=info msg="Start event monitor" Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733795150Z" level=info msg="Start snapshots syncer" Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733812152Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:44:09.734231 containerd[1460]: time="2024-08-05T22:44:09.733825014Z" level=info msg="Start streaming server" Aug 5 22:44:09.736857 containerd[1460]: time="2024-08-05T22:44:09.735144557Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:44:09.736857 containerd[1460]: time="2024-08-05T22:44:09.735195911Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:44:09.736857 containerd[1460]: time="2024-08-05T22:44:09.735543360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:44:09.736857 containerd[1460]: time="2024-08-05T22:44:09.735607179Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:44:09.735798 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 22:44:09.738999 containerd[1460]: time="2024-08-05T22:44:09.737117102Z" level=info msg="containerd successfully booted in 0.175050s" Aug 5 22:44:09.767047 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:44:09.786339 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:44:09.807657 systemd-logind[1449]: New session 1 of user core. Aug 5 22:44:09.838681 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
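containerd above logs `failed to load cni during init ... no network config found in /etc/cni/net.d`, which is expected this early in boot: the CNI configuration normally arrives later with a pod network add-on. Purely as an illustration (the network name, subnet, and plugin choices below are assumptions, not this host's configuration), a conflist of the kind containerd is looking for could be written like this:

```python
# Illustrative only: a minimal CNI conflist satisfying the check containerd logs above.
import json, pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "examplenet",          # placeholder name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",          # placeholder pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))
```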
Aug 5 22:44:09.866800 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:44:09.911973 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:10.140740 tar[1458]: linux-amd64/LICENSE Aug 5 22:44:10.140740 tar[1458]: linux-amd64/README.md Aug 5 22:44:10.156072 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:44:10.213082 systemd[1557]: Queued start job for default target default.target. Aug 5 22:44:10.220198 systemd[1557]: Created slice app.slice - User Application Slice. Aug 5 22:44:10.220997 systemd[1557]: Reached target paths.target - Paths. Aug 5 22:44:10.221027 systemd[1557]: Reached target timers.target - Timers. Aug 5 22:44:10.224773 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:44:10.259122 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:44:10.259295 systemd[1557]: Reached target sockets.target - Sockets. Aug 5 22:44:10.260089 systemd[1557]: Reached target basic.target - Basic System. Aug 5 22:44:10.260215 systemd[1557]: Reached target default.target - Main User Target. Aug 5 22:44:10.260291 systemd[1557]: Startup finished in 333ms. Aug 5 22:44:10.260456 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:44:10.278551 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:44:10.366887 instance-setup[1508]: INFO Running google_set_multiqueue. Aug 5 22:44:10.388381 instance-setup[1508]: INFO Set channels for eth0 to 2. Aug 5 22:44:10.394229 instance-setup[1508]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Aug 5 22:44:10.396733 instance-setup[1508]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Aug 5 22:44:10.397036 instance-setup[1508]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Aug 5 22:44:10.398751 instance-setup[1508]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Aug 5 22:44:10.398996 instance-setup[1508]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Aug 5 22:44:10.401213 instance-setup[1508]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Aug 5 22:44:10.401460 instance-setup[1508]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Aug 5 22:44:10.403304 instance-setup[1508]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Aug 5 22:44:10.413801 instance-setup[1508]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 5 22:44:10.418398 instance-setup[1508]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Aug 5 22:44:10.420593 instance-setup[1508]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Aug 5 22:44:10.420683 instance-setup[1508]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Aug 5 22:44:10.492288 init.sh[1502]: + /usr/bin/google_metadata_script_runner --script-type startup Aug 5 22:44:10.538756 systemd[1]: Started sshd@1-10.128.0.28:22-139.178.68.195:34060.service - OpenSSH per-connection server daemon (139.178.68.195:34060). Aug 5 22:44:10.766582 startup-script[1600]: INFO Starting startup scripts. Aug 5 22:44:10.773328 startup-script[1600]: INFO No startup scripts found in metadata. Aug 5 22:44:10.773454 startup-script[1600]: INFO Finished running startup scripts. 
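google_set_multiqueue above pins the virtio-net IRQs and transmit queues to CPUs through the standard procfs/sysfs interfaces. A sketch of the equivalent writes, reusing the exact IRQ numbers and XPS masks reported in the log (like the original script, this needs root):

```python
# Sketch of the affinity writes reported by google_set_multiqueue above:
# IRQs 31/32 -> CPU 0, IRQs 33/34 -> CPU 1; tx queue 0 -> mask 0x1, queue 1 -> mask 0x2.
def pin_irq(irq: int, cpus: str) -> None:
    # smp_affinity_list takes a CPU list such as "0" or "0-1"
    with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
        f.write(cpus)

def set_xps(iface: str, queue: int, mask: int) -> None:
    # xps_cpus takes a hexadecimal CPU mask
    with open(f"/sys/class/net/{iface}/queues/tx-{queue}/xps_cpus", "w") as f:
        f.write(format(mask, "x"))

for irq, cpu in ((31, "0"), (32, "0"), (33, "1"), (34, "1")):
    pin_irq(irq, cpu)
for queue, mask in ((0, 0x1), (1, 0x2)):
    set_xps("eth0", queue, mask)
```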
Aug 5 22:44:10.815163 init.sh[1502]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Aug 5 22:44:10.815163 init.sh[1502]: + daemon_pids=() Aug 5 22:44:10.815163 init.sh[1502]: + for d in accounts clock_skew network Aug 5 22:44:10.815163 init.sh[1502]: + daemon_pids+=($!) Aug 5 22:44:10.815163 init.sh[1502]: + for d in accounts clock_skew network Aug 5 22:44:10.815517 init.sh[1606]: + /usr/bin/google_accounts_daemon Aug 5 22:44:10.815878 init.sh[1502]: + daemon_pids+=($!) Aug 5 22:44:10.815878 init.sh[1502]: + for d in accounts clock_skew network Aug 5 22:44:10.815878 init.sh[1502]: + daemon_pids+=($!) Aug 5 22:44:10.815878 init.sh[1502]: + NOTIFY_SOCKET=/run/systemd/notify Aug 5 22:44:10.815878 init.sh[1502]: + /usr/bin/systemd-notify --ready Aug 5 22:44:10.816405 init.sh[1607]: + /usr/bin/google_clock_skew_daemon Aug 5 22:44:10.817577 init.sh[1608]: + /usr/bin/google_network_daemon Aug 5 22:44:10.849703 systemd[1]: Started oem-gce.service - GCE Linux Agent. Aug 5 22:44:10.863363 init.sh[1502]: + wait -n 1606 1607 1608 Aug 5 22:44:10.896246 sshd[1602]: Accepted publickey for core from 139.178.68.195 port 34060 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:10.901092 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:10.914618 systemd-logind[1449]: New session 2 of user core. Aug 5 22:44:10.923594 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:44:11.133825 sshd[1602]: pam_unix(sshd:session): session closed for user core Aug 5 22:44:11.148468 systemd[1]: sshd@1-10.128.0.28:22-139.178.68.195:34060.service: Deactivated successfully. Aug 5 22:44:11.154043 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 22:44:11.157995 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:44:11.162046 systemd-logind[1449]: Removed session 2. Aug 5 22:44:11.163090 ntpd[1428]: 5 Aug 22:44:11 ntpd[1428]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1c%2]:123 Aug 5 22:44:11.162490 ntpd[1428]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:1c%2]:123 Aug 5 22:44:11.201851 systemd[1]: Started sshd@2-10.128.0.28:22-139.178.68.195:34072.service - OpenSSH per-connection server daemon (139.178.68.195:34072). Aug 5 22:44:11.321725 google-networking[1608]: INFO Starting Google Networking daemon. Aug 5 22:44:11.334517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:11.347341 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:44:11.353331 (kubelet)[1625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:44:11.357687 systemd[1]: Startup finished in 1.057s (kernel) + 9.374s (initrd) + 9.282s (userspace) = 19.714s. Aug 5 22:44:11.443063 google-clock-skew[1607]: INFO Starting Google Clock Skew daemon. Aug 5 22:44:11.459682 google-clock-skew[1607]: INFO Clock drift token has changed: 0. Aug 5 22:44:12.000285 systemd-resolved[1315]: Clock change detected. Flushing caches. Aug 5 22:44:12.001014 google-clock-skew[1607]: INFO Synced system time with hardware clock. 
Aug 5 22:44:12.017317 groupadd[1635]: group added to /etc/group: name=google-sudoers, GID=1000 Aug 5 22:44:12.023547 groupadd[1635]: group added to /etc/gshadow: name=google-sudoers Aug 5 22:44:12.037577 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 34072 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:12.038735 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:12.046926 systemd-logind[1449]: New session 3 of user core. Aug 5 22:44:12.052732 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:44:12.098693 groupadd[1635]: new group: name=google-sudoers, GID=1000 Aug 5 22:44:12.133745 google-accounts[1606]: INFO Starting Google Accounts daemon. Aug 5 22:44:12.147036 google-accounts[1606]: WARNING OS Login not installed. Aug 5 22:44:12.150247 google-accounts[1606]: INFO Creating a new user account for 0. Aug 5 22:44:12.156642 init.sh[1649]: useradd: invalid user name '0': use --badname to ignore Aug 5 22:44:12.156983 google-accounts[1606]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Aug 5 22:44:12.256987 sshd[1615]: pam_unix(sshd:session): session closed for user core Aug 5 22:44:12.263914 systemd[1]: sshd@2-10.128.0.28:22-139.178.68.195:34072.service: Deactivated successfully. Aug 5 22:44:12.266816 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:44:12.270287 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:44:12.272164 systemd-logind[1449]: Removed session 3. Aug 5 22:44:12.785242 kubelet[1625]: E0805 22:44:12.785173 1625 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:44:12.788322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:44:12.788620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:44:12.789082 systemd[1]: kubelet.service: Consumed 1.298s CPU time. Aug 5 22:44:22.321141 systemd[1]: Started sshd@3-10.128.0.28:22-139.178.68.195:41934.service - OpenSSH per-connection server daemon (139.178.68.195:41934). Aug 5 22:44:22.608274 sshd[1657]: Accepted publickey for core from 139.178.68.195 port 41934 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:22.610758 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:22.617720 systemd-logind[1449]: New session 4 of user core. Aug 5 22:44:22.625828 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:44:22.826060 sshd[1657]: pam_unix(sshd:session): session closed for user core Aug 5 22:44:22.831552 systemd[1]: sshd@3-10.128.0.28:22-139.178.68.195:41934.service: Deactivated successfully. Aug 5 22:44:22.834423 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:44:22.836041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:44:22.838323 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:44:22.845864 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:44:22.847826 systemd-logind[1449]: Removed session 4. 
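Both kubelet starts in this log fail the same way: /var/lib/kubelet/config.yaml does not exist yet because the node has not been joined to a cluster. Purely as an illustration of the missing file (the field values are assumptions, not this cluster's settings; the file is normally generated by `kubeadm init`/`kubeadm join`), a bare-bones KubeletConfiguration could be written like this:

```python
# Hypothetical stand-in for the config file kubelet reports as missing above.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd   # assumption; matches SystemdCgroup=true in the containerd runc options above
"""

path = Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_KUBELET_CONFIG)
print(f"wrote {path}")
```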
Aug 5 22:44:22.883037 systemd[1]: Started sshd@4-10.128.0.28:22-139.178.68.195:41938.service - OpenSSH per-connection server daemon (139.178.68.195:41938). Aug 5 22:44:23.160737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:23.173331 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:44:23.186075 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 41938 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:23.188628 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:23.198750 systemd-logind[1449]: New session 5 of user core. Aug 5 22:44:23.203724 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:44:23.246852 kubelet[1674]: E0805 22:44:23.246733 1674 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:44:23.251995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:44:23.252333 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:44:23.399193 sshd[1667]: pam_unix(sshd:session): session closed for user core Aug 5 22:44:23.403531 systemd[1]: sshd@4-10.128.0.28:22-139.178.68.195:41938.service: Deactivated successfully. Aug 5 22:44:23.405915 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:44:23.407778 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:44:23.409259 systemd-logind[1449]: Removed session 5. Aug 5 22:44:23.456891 systemd[1]: Started sshd@5-10.128.0.28:22-139.178.68.195:41948.service - OpenSSH per-connection server daemon (139.178.68.195:41948). Aug 5 22:44:23.750691 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 41948 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:23.752596 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:23.758310 systemd-logind[1449]: New session 6 of user core. Aug 5 22:44:23.768765 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 22:44:23.966259 sshd[1687]: pam_unix(sshd:session): session closed for user core Aug 5 22:44:23.970845 systemd[1]: sshd@5-10.128.0.28:22-139.178.68.195:41948.service: Deactivated successfully. Aug 5 22:44:23.973329 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:44:23.975116 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:44:23.976824 systemd-logind[1449]: Removed session 6. Aug 5 22:44:24.025884 systemd[1]: Started sshd@6-10.128.0.28:22-139.178.68.195:41960.service - OpenSSH per-connection server daemon (139.178.68.195:41960). Aug 5 22:44:24.322861 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 41960 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:24.324881 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:24.330578 systemd-logind[1449]: New session 7 of user core. Aug 5 22:44:24.336749 systemd[1]: Started session-7.scope - Session 7 of User core. 
Aug 5 22:44:24.520446 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:44:24.521035 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:44:24.536362 sudo[1697]: pam_unix(sudo:session): session closed for user root Aug 5 22:44:24.579680 sshd[1694]: pam_unix(sshd:session): session closed for user core Aug 5 22:44:24.585169 systemd[1]: sshd@6-10.128.0.28:22-139.178.68.195:41960.service: Deactivated successfully. Aug 5 22:44:24.587907 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:44:24.590097 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:44:24.591826 systemd-logind[1449]: Removed session 7. Aug 5 22:44:24.638924 systemd[1]: Started sshd@7-10.128.0.28:22-139.178.68.195:41974.service - OpenSSH per-connection server daemon (139.178.68.195:41974). Aug 5 22:44:24.929125 sshd[1702]: Accepted publickey for core from 139.178.68.195 port 41974 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:24.931155 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:24.937743 systemd-logind[1449]: New session 8 of user core. Aug 5 22:44:24.947850 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:44:25.111505 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:44:25.111979 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:44:25.117039 sudo[1706]: pam_unix(sudo:session): session closed for user root Aug 5 22:44:25.131123 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:44:25.131605 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:44:25.150078 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:44:25.154379 auditctl[1709]: No rules Aug 5 22:44:25.155930 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:44:25.156250 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:44:25.163000 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:44:25.206958 augenrules[1727]: No rules Aug 5 22:44:25.209731 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:44:25.211994 sudo[1705]: pam_unix(sudo:session): session closed for user root Aug 5 22:44:25.259111 sshd[1702]: pam_unix(sshd:session): session closed for user core Aug 5 22:44:25.263885 systemd[1]: sshd@7-10.128.0.28:22-139.178.68.195:41974.service: Deactivated successfully. Aug 5 22:44:25.266243 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:44:25.268826 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:44:25.270687 systemd-logind[1449]: Removed session 8. Aug 5 22:44:25.316969 systemd[1]: Started sshd@8-10.128.0.28:22-139.178.68.195:41980.service - OpenSSH per-connection server daemon (139.178.68.195:41980). Aug 5 22:44:25.605744 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 41980 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:44:25.607864 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:44:25.614544 systemd-logind[1449]: New session 9 of user core. Aug 5 22:44:25.619763 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 5 22:44:25.787569 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:44:25.788062 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:44:25.945026 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:44:25.946061 (dockerd)[1747]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:44:26.359429 dockerd[1747]: time="2024-08-05T22:44:26.359242066Z" level=info msg="Starting up" Aug 5 22:44:26.454661 dockerd[1747]: time="2024-08-05T22:44:26.454605535Z" level=info msg="Loading containers: start." Aug 5 22:44:26.636631 kernel: Initializing XFRM netlink socket Aug 5 22:44:26.754987 systemd-networkd[1372]: docker0: Link UP Aug 5 22:44:26.778882 dockerd[1747]: time="2024-08-05T22:44:26.778817549Z" level=info msg="Loading containers: done." Aug 5 22:44:26.882015 dockerd[1747]: time="2024-08-05T22:44:26.879662463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:44:26.882015 dockerd[1747]: time="2024-08-05T22:44:26.880005789Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:44:26.882015 dockerd[1747]: time="2024-08-05T22:44:26.880179435Z" level=info msg="Daemon has completed initialization" Aug 5 22:44:26.882179 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3145710313-merged.mount: Deactivated successfully. Aug 5 22:44:26.932492 dockerd[1747]: time="2024-08-05T22:44:26.932287621Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:44:26.934187 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:44:27.900977 containerd[1460]: time="2024-08-05T22:44:27.900907743Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\"" Aug 5 22:44:28.448564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215632740.mount: Deactivated successfully. 
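Once dockerd logs `API listen on /run/docker.sock` above, the daemon is reachable over that Unix socket. A small sketch using the Docker SDK for Python (an assumption here; any client speaking the Docker Engine API over the socket would work):

```python
# Sketch: talk to the dockerd instance started above via its Unix socket.
# Assumes the Docker SDK for Python (`pip install docker`) is installed.
import docker

client = docker.DockerClient(base_url="unix:///run/docker.sock")
print(client.ping())                  # True if the daemon answers
print(client.version()["Version"])    # the daemon reports version 24.0.9 in the log above
```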
Aug 5 22:44:30.285486 containerd[1460]: time="2024-08-05T22:44:30.285388945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:30.287266 containerd[1460]: time="2024-08-05T22:44:30.287170310Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.3: active requests=0, bytes read=32779866" Aug 5 22:44:30.289092 containerd[1460]: time="2024-08-05T22:44:30.288983532Z" level=info msg="ImageCreate event name:\"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:30.294891 containerd[1460]: time="2024-08-05T22:44:30.294802651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:30.296748 containerd[1460]: time="2024-08-05T22:44:30.296572271Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.3\" with image id \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\", size \"32770038\" in 2.395607355s" Aug 5 22:44:30.296748 containerd[1460]: time="2024-08-05T22:44:30.296639291Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\" returns image reference \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\"" Aug 5 22:44:30.329289 containerd[1460]: time="2024-08-05T22:44:30.329226081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\"" Aug 5 22:44:31.971217 containerd[1460]: time="2024-08-05T22:44:31.971140918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:31.973054 containerd[1460]: time="2024-08-05T22:44:31.972956046Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.3: active requests=0, bytes read=29591469" Aug 5 22:44:31.975493 containerd[1460]: time="2024-08-05T22:44:31.974618649Z" level=info msg="ImageCreate event name:\"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:31.979930 containerd[1460]: time="2024-08-05T22:44:31.979865461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:31.981808 containerd[1460]: time="2024-08-05T22:44:31.981747855Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.3\" with image id \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\", size \"31139481\" in 1.652471199s" Aug 5 22:44:31.982043 containerd[1460]: time="2024-08-05T22:44:31.982011515Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\" returns image reference \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\"" Aug 5 22:44:32.015537 containerd[1460]: 
time="2024-08-05T22:44:32.015486802Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\"" Aug 5 22:44:33.334768 containerd[1460]: time="2024-08-05T22:44:33.334686101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:33.336577 containerd[1460]: time="2024-08-05T22:44:33.336474556Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.3: active requests=0, bytes read=17781460" Aug 5 22:44:33.338556 containerd[1460]: time="2024-08-05T22:44:33.338478080Z" level=info msg="ImageCreate event name:\"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:33.344511 containerd[1460]: time="2024-08-05T22:44:33.342900503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:33.344511 containerd[1460]: time="2024-08-05T22:44:33.344391432Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.3\" with image id \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\", size \"19329508\" in 1.328843445s" Aug 5 22:44:33.344511 containerd[1460]: time="2024-08-05T22:44:33.344445204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\" returns image reference \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\"" Aug 5 22:44:33.376553 containerd[1460]: time="2024-08-05T22:44:33.376502387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\"" Aug 5 22:44:33.377896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 22:44:33.384803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:44:33.679756 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:33.688099 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:44:33.757127 kubelet[1962]: E0805 22:44:33.757056 1962 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:44:33.759081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:44:33.759311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:44:34.703347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2697098970.mount: Deactivated successfully. 
Aug 5 22:44:35.314612 containerd[1460]: time="2024-08-05T22:44:35.314537186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:35.316068 containerd[1460]: time="2024-08-05T22:44:35.316005612Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.3: active requests=0, bytes read=29038330" Aug 5 22:44:35.317491 containerd[1460]: time="2024-08-05T22:44:35.317420644Z" level=info msg="ImageCreate event name:\"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:35.322204 containerd[1460]: time="2024-08-05T22:44:35.320780161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:35.322204 containerd[1460]: time="2024-08-05T22:44:35.322005590Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.3\" with image id \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\", repo tag \"registry.k8s.io/kube-proxy:v1.30.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\", size \"29035454\" in 1.945447858s" Aug 5 22:44:35.322204 containerd[1460]: time="2024-08-05T22:44:35.322065326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\" returns image reference \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\"" Aug 5 22:44:35.355375 containerd[1460]: time="2024-08-05T22:44:35.355321709Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Aug 5 22:44:35.803903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3819885299.mount: Deactivated successfully. 
Aug 5 22:44:36.936997 containerd[1460]: time="2024-08-05T22:44:36.936916171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:36.938865 containerd[1460]: time="2024-08-05T22:44:36.938757415Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Aug 5 22:44:36.942259 containerd[1460]: time="2024-08-05T22:44:36.940331584Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:36.947477 containerd[1460]: time="2024-08-05T22:44:36.947348543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:36.950546 containerd[1460]: time="2024-08-05T22:44:36.949772505Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.594388272s" Aug 5 22:44:36.950546 containerd[1460]: time="2024-08-05T22:44:36.949836848Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Aug 5 22:44:36.989087 containerd[1460]: time="2024-08-05T22:44:36.988673870Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 22:44:37.440809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3109243057.mount: Deactivated successfully. 
Aug 5 22:44:37.450979 containerd[1460]: time="2024-08-05T22:44:37.450853308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:37.452980 containerd[1460]: time="2024-08-05T22:44:37.452876306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Aug 5 22:44:37.456491 containerd[1460]: time="2024-08-05T22:44:37.455027902Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:37.462096 containerd[1460]: time="2024-08-05T22:44:37.462026411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:37.463534 containerd[1460]: time="2024-08-05T22:44:37.463447441Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 474.684406ms" Aug 5 22:44:37.463534 containerd[1460]: time="2024-08-05T22:44:37.463536000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 5 22:44:37.496637 containerd[1460]: time="2024-08-05T22:44:37.496589040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Aug 5 22:44:37.949090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2136417440.mount: Deactivated successfully. Aug 5 22:44:39.815352 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
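
By this point the kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns and pause:3.9 images have been pulled through containerd, and the etcd:3.5.12-0 pull is in flight. A sketch for listing what has landed in the CRI image store, assuming crictl is installed and containerd is on its default socket path:

    # List the images pulled above from the CRI side
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep registry.k8s.io
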
Aug 5 22:44:40.462536 containerd[1460]: time="2024-08-05T22:44:40.462423842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:40.464323 containerd[1460]: time="2024-08-05T22:44:40.464238293Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Aug 5 22:44:40.465766 containerd[1460]: time="2024-08-05T22:44:40.465711254Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:40.469963 containerd[1460]: time="2024-08-05T22:44:40.469872375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:44:40.471805 containerd[1460]: time="2024-08-05T22:44:40.471574058Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.974920405s" Aug 5 22:44:40.471805 containerd[1460]: time="2024-08-05T22:44:40.471632481Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Aug 5 22:44:43.878033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 5 22:44:43.887636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:44:44.194974 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 22:44:44.195123 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 22:44:44.195595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:44.211410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:44:44.242613 systemd[1]: Reloading requested from client PID 2156 ('systemctl') (unit session-9.scope)... Aug 5 22:44:44.242652 systemd[1]: Reloading... Aug 5 22:44:44.393884 zram_generator::config[2198]: No configuration found. Aug 5 22:44:44.546286 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:44:44.663455 systemd[1]: Reloading finished in 419 ms. Aug 5 22:44:44.723810 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 22:44:44.723994 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 22:44:44.724446 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:44.734106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:44:45.917795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:45.922409 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:44:45.992028 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:44:45.992028 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:44:45.992028 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:44:45.994366 kubelet[2242]: I0805 22:44:45.994277 2242 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:44:46.620370 kubelet[2242]: I0805 22:44:46.620305 2242 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Aug 5 22:44:46.620370 kubelet[2242]: I0805 22:44:46.620347 2242 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:44:46.620745 kubelet[2242]: I0805 22:44:46.620705 2242 server.go:927] "Client rotation is on, will bootstrap in background" Aug 5 22:44:46.652978 kubelet[2242]: I0805 22:44:46.652095 2242 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:44:46.653543 kubelet[2242]: E0805 22:44:46.653156 2242 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.674124 kubelet[2242]: I0805 22:44:46.673089 2242 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:44:46.674124 kubelet[2242]: I0805 22:44:46.673543 2242 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:44:46.674124 kubelet[2242]: I0805 22:44:46.673587 2242 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:44:46.674124 kubelet[2242]: I0805 22:44:46.673809 2242 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:44:46.674488 kubelet[2242]: I0805 22:44:46.673821 2242 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:44:46.674488 kubelet[2242]: I0805 22:44:46.673981 2242 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:44:46.675634 kubelet[2242]: I0805 22:44:46.675598 2242 kubelet.go:400] "Attempting to sync node with API server" Aug 5 22:44:46.675634 kubelet[2242]: I0805 22:44:46.675638 2242 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:44:46.675815 kubelet[2242]: I0805 22:44:46.675673 2242 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:44:46.675815 kubelet[2242]: I0805 22:44:46.675697 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:44:46.682545 kubelet[2242]: W0805 22:44:46.682426 2242 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.682545 kubelet[2242]: E0805 22:44:46.682524 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.682790 kubelet[2242]: W0805 22:44:46.682633 2242 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.682790 kubelet[2242]: E0805 22:44:46.682683 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.683657 kubelet[2242]: I0805 22:44:46.683298 2242 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:44:46.687197 kubelet[2242]: I0805 22:44:46.685733 2242 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:44:46.687197 kubelet[2242]: W0805 22:44:46.685859 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 5 22:44:46.691488 kubelet[2242]: I0805 22:44:46.689212 2242 server.go:1264] "Started kubelet" Aug 5 22:44:46.695058 kubelet[2242]: I0805 22:44:46.694990 2242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:44:46.699489 kubelet[2242]: I0805 22:44:46.697662 2242 server.go:455] "Adding debug handlers to kubelet server" Aug 5 22:44:46.702326 kubelet[2242]: I0805 22:44:46.702235 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:44:46.703025 kubelet[2242]: I0805 22:44:46.702992 2242 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:44:46.703881 kubelet[2242]: E0805 22:44:46.703680 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal.17e8f67fe860d02f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,UID:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,},FirstTimestamp:2024-08-05 22:44:46.689153071 +0000 UTC m=+0.760305048,LastTimestamp:2024-08-05 22:44:46.689153071 +0000 UTC m=+0.760305048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,}" Aug 5 22:44:46.704526 kubelet[2242]: I0805 22:44:46.704500 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:44:46.711657 kubelet[2242]: I0805 22:44:46.710335 2242 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:44:46.712152 kubelet[2242]: I0805 22:44:46.712126 2242 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Aug 5 22:44:46.712369 kubelet[2242]: I0805 22:44:46.712352 2242 reconciler.go:26] "Reconciler: start to sync state" Aug 5 22:44:46.714302 kubelet[2242]: W0805 22:44:46.714227 2242 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.714660 kubelet[2242]: E0805 22:44:46.714637 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.715799 kubelet[2242]: E0805 22:44:46.715703 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="200ms" Aug 5 22:44:46.716491 kubelet[2242]: E0805 22:44:46.716446 2242 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:44:46.717286 kubelet[2242]: I0805 22:44:46.716855 2242 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:44:46.717286 kubelet[2242]: I0805 22:44:46.716959 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:44:46.721512 kubelet[2242]: I0805 22:44:46.720504 2242 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:44:46.737220 kubelet[2242]: I0805 22:44:46.737144 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:44:46.739367 kubelet[2242]: I0805 22:44:46.739324 2242 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:44:46.739367 kubelet[2242]: I0805 22:44:46.739369 2242 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:44:46.739625 kubelet[2242]: I0805 22:44:46.739400 2242 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:44:46.740554 kubelet[2242]: E0805 22:44:46.740511 2242 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:44:46.747395 kubelet[2242]: W0805 22:44:46.747335 2242 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.747395 kubelet[2242]: E0805 22:44:46.747389 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:46.763875 kubelet[2242]: I0805 22:44:46.763824 2242 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:44:46.763875 kubelet[2242]: I0805 22:44:46.763847 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:44:46.763875 kubelet[2242]: I0805 22:44:46.763888 2242 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:44:46.767213 kubelet[2242]: I0805 22:44:46.767153 2242 policy_none.go:49] "None policy: Start" Aug 5 22:44:46.768775 kubelet[2242]: I0805 22:44:46.768292 2242 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:44:46.768775 kubelet[2242]: I0805 22:44:46.768324 2242 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:44:46.777318 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 22:44:46.796827 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 22:44:46.802278 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 5 22:44:46.812278 kubelet[2242]: I0805 22:44:46.812237 2242 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:44:46.813373 kubelet[2242]: I0805 22:44:46.812809 2242 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 5 22:44:46.813373 kubelet[2242]: I0805 22:44:46.813047 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:44:46.816826 kubelet[2242]: E0805 22:44:46.816785 2242 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" not found" Aug 5 22:44:46.830692 kubelet[2242]: I0805 22:44:46.830625 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:46.831234 kubelet[2242]: E0805 22:44:46.831165 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:46.841666 kubelet[2242]: I0805 22:44:46.841566 2242 topology_manager.go:215] "Topology Admit Handler" podUID="f61112cc80d993107048f9800d4c6284" podNamespace="kube-system" podName="kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:46.849343 kubelet[2242]: I0805 22:44:46.849288 2242 topology_manager.go:215] "Topology Admit Handler" podUID="014a586b24bb95bd5307cac345e3de2c" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:46.854714 kubelet[2242]: I0805 22:44:46.854280 2242 topology_manager.go:215] "Topology Admit Handler" podUID="b84aebcad54618cfeae6ba3076718644" podNamespace="kube-system" podName="kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:46.864967 systemd[1]: Created slice kubepods-burstable-podf61112cc80d993107048f9800d4c6284.slice - libcontainer container kubepods-burstable-podf61112cc80d993107048f9800d4c6284.slice. Aug 5 22:44:46.880549 systemd[1]: Created slice kubepods-burstable-pod014a586b24bb95bd5307cac345e3de2c.slice - libcontainer container kubepods-burstable-pod014a586b24bb95bd5307cac345e3de2c.slice. Aug 5 22:44:46.895785 systemd[1]: Created slice kubepods-burstable-podb84aebcad54618cfeae6ba3076718644.slice - libcontainer container kubepods-burstable-podb84aebcad54618cfeae6ba3076718644.slice. 
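
The three Topology Admit Handler entries are the static control-plane pods picked up from the /etc/kubernetes/manifests path registered earlier, and each pod UID gets its own kubepods-burstable-pod<UID>.slice. Listing the manifest directory is the quickest way to confirm what the kubelet is about to run; this is an assumed check and the .yaml file names are an assumption:

    # Static pod manifests the kubelet watches
    ls -l /etc/kubernetes/manifests/
    grep -l 'kube-apiserver' /etc/kubernetes/manifests/*.yaml
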
Aug 5 22:44:46.916708 kubelet[2242]: E0805 22:44:46.916636 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="400ms" Aug 5 22:44:47.014163 kubelet[2242]: I0805 22:44:47.014090 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-ca-certs\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014755 kubelet[2242]: I0805 22:44:47.014161 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-k8s-certs\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014755 kubelet[2242]: I0805 22:44:47.014213 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-kubeconfig\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014755 kubelet[2242]: I0805 22:44:47.014240 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014755 kubelet[2242]: I0805 22:44:47.014269 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b84aebcad54618cfeae6ba3076718644-kubeconfig\") pod \"kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"b84aebcad54618cfeae6ba3076718644\") " pod="kube-system/kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014924 kubelet[2242]: I0805 22:44:47.014297 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f61112cc80d993107048f9800d4c6284-ca-certs\") pod \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"f61112cc80d993107048f9800d4c6284\") " pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014924 kubelet[2242]: I0805 22:44:47.014330 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f61112cc80d993107048f9800d4c6284-k8s-certs\") pod \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"f61112cc80d993107048f9800d4c6284\") " pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014924 kubelet[2242]: I0805 22:44:47.014366 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f61112cc80d993107048f9800d4c6284-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"f61112cc80d993107048f9800d4c6284\") " pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.014924 kubelet[2242]: I0805 22:44:47.014412 2242 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.038990 kubelet[2242]: I0805 22:44:47.038893 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.039393 kubelet[2242]: E0805 22:44:47.039340 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.176725 containerd[1460]: time="2024-08-05T22:44:47.176570717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,Uid:f61112cc80d993107048f9800d4c6284,Namespace:kube-system,Attempt:0,}" Aug 5 22:44:47.194419 containerd[1460]: time="2024-08-05T22:44:47.194185146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,Uid:014a586b24bb95bd5307cac345e3de2c,Namespace:kube-system,Attempt:0,}" Aug 5 22:44:47.200286 containerd[1460]: time="2024-08-05T22:44:47.200219748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,Uid:b84aebcad54618cfeae6ba3076718644,Namespace:kube-system,Attempt:0,}" Aug 5 22:44:47.318009 kubelet[2242]: E0805 22:44:47.317900 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="800ms" Aug 5 22:44:47.445285 kubelet[2242]: I0805 22:44:47.445078 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.445765 kubelet[2242]: E0805 22:44:47.445683 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:47.632034 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408534359.mount: Deactivated successfully. Aug 5 22:44:47.646578 kubelet[2242]: W0805 22:44:47.646490 2242 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:47.646578 kubelet[2242]: E0805 22:44:47.646579 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.28:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:47.649306 containerd[1460]: time="2024-08-05T22:44:47.649236067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:44:47.651132 containerd[1460]: time="2024-08-05T22:44:47.651033662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Aug 5 22:44:47.653296 containerd[1460]: time="2024-08-05T22:44:47.653226083Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:44:47.654983 containerd[1460]: time="2024-08-05T22:44:47.654910021Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:44:47.656860 containerd[1460]: time="2024-08-05T22:44:47.656784129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:44:47.658920 containerd[1460]: time="2024-08-05T22:44:47.658810690Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:44:47.659672 containerd[1460]: time="2024-08-05T22:44:47.659531757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:44:47.669236 containerd[1460]: time="2024-08-05T22:44:47.668767291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:44:47.672722 containerd[1460]: time="2024-08-05T22:44:47.672652249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 472.287928ms" Aug 5 22:44:47.675664 containerd[1460]: time="2024-08-05T22:44:47.675602995Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 498.889421ms" Aug 5 22:44:47.690243 containerd[1460]: 
time="2024-08-05T22:44:47.690168001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 495.85752ms" Aug 5 22:44:47.896495 containerd[1460]: time="2024-08-05T22:44:47.893444339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:44:47.896495 containerd[1460]: time="2024-08-05T22:44:47.893558925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:44:47.896495 containerd[1460]: time="2024-08-05T22:44:47.893592290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:44:47.896495 containerd[1460]: time="2024-08-05T22:44:47.893617490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:44:47.902526 containerd[1460]: time="2024-08-05T22:44:47.901983819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:44:47.902526 containerd[1460]: time="2024-08-05T22:44:47.902065340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:44:47.902526 containerd[1460]: time="2024-08-05T22:44:47.902118263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:44:47.902526 containerd[1460]: time="2024-08-05T22:44:47.902145261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:44:47.913286 containerd[1460]: time="2024-08-05T22:44:47.913133182Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:44:47.913626 containerd[1460]: time="2024-08-05T22:44:47.913562836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:44:47.914065 containerd[1460]: time="2024-08-05T22:44:47.913795378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:44:47.914065 containerd[1460]: time="2024-08-05T22:44:47.913916502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:44:47.937784 systemd[1]: Started cri-containerd-bf579629df85c7c17d4edd90f0d5c7a8f880599881ffd78fb8a42284ee535b25.scope - libcontainer container bf579629df85c7c17d4edd90f0d5c7a8f880599881ffd78fb8a42284ee535b25. Aug 5 22:44:47.963803 systemd[1]: Started cri-containerd-5e2cb850eb788e75e66489cdac759df5fa91a29f3b359431db2abf59adda8da8.scope - libcontainer container 5e2cb850eb788e75e66489cdac759df5fa91a29f3b359431db2abf59adda8da8. Aug 5 22:44:47.985418 systemd[1]: Started cri-containerd-17122613a0fb07d97db138f50cdc117fb721e6bc3d3e096a5a7fc829e56115ea.scope - libcontainer container 17122613a0fb07d97db138f50cdc117fb721e6bc3d3e096a5a7fc829e56115ea. 
Aug 5 22:44:48.072706 containerd[1460]: time="2024-08-05T22:44:48.072410549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,Uid:b84aebcad54618cfeae6ba3076718644,Namespace:kube-system,Attempt:0,} returns sandbox id \"bf579629df85c7c17d4edd90f0d5c7a8f880599881ffd78fb8a42284ee535b25\"" Aug 5 22:44:48.077650 kubelet[2242]: E0805 22:44:48.076754 2242 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-21291" Aug 5 22:44:48.082064 containerd[1460]: time="2024-08-05T22:44:48.082007172Z" level=info msg="CreateContainer within sandbox \"bf579629df85c7c17d4edd90f0d5c7a8f880599881ffd78fb8a42284ee535b25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:44:48.091183 containerd[1460]: time="2024-08-05T22:44:48.091000036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,Uid:f61112cc80d993107048f9800d4c6284,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e2cb850eb788e75e66489cdac759df5fa91a29f3b359431db2abf59adda8da8\"" Aug 5 22:44:48.094162 kubelet[2242]: E0805 22:44:48.093921 2242 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-21291" Aug 5 22:44:48.098266 containerd[1460]: time="2024-08-05T22:44:48.098107802Z" level=info msg="CreateContainer within sandbox \"5e2cb850eb788e75e66489cdac759df5fa91a29f3b359431db2abf59adda8da8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:44:48.100585 kubelet[2242]: W0805 22:44:48.100490 2242 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:48.100585 kubelet[2242]: E0805 22:44:48.100593 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.28:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:48.108600 kubelet[2242]: W0805 22:44:48.108442 2242 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:48.108868 kubelet[2242]: E0805 22:44:48.108614 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:48.111134 containerd[1460]: time="2024-08-05T22:44:48.110689520Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,Uid:014a586b24bb95bd5307cac345e3de2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"17122613a0fb07d97db138f50cdc117fb721e6bc3d3e096a5a7fc829e56115ea\"" Aug 5 22:44:48.114068 kubelet[2242]: E0805 22:44:48.113907 2242 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flat" Aug 5 22:44:48.116254 containerd[1460]: time="2024-08-05T22:44:48.115817834Z" level=info msg="CreateContainer within sandbox \"bf579629df85c7c17d4edd90f0d5c7a8f880599881ffd78fb8a42284ee535b25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e23dab63a46c8978e5e3978b3d570ff875cc680d4d89162f82b963261a85237\"" Aug 5 22:44:48.116880 containerd[1460]: time="2024-08-05T22:44:48.116835781Z" level=info msg="StartContainer for \"9e23dab63a46c8978e5e3978b3d570ff875cc680d4d89162f82b963261a85237\"" Aug 5 22:44:48.118612 containerd[1460]: time="2024-08-05T22:44:48.117993458Z" level=info msg="CreateContainer within sandbox \"17122613a0fb07d97db138f50cdc117fb721e6bc3d3e096a5a7fc829e56115ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:44:48.118761 kubelet[2242]: E0805 22:44:48.118519 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.28:6443: connect: connection refused" interval="1.6s" Aug 5 22:44:48.139257 containerd[1460]: time="2024-08-05T22:44:48.139146312Z" level=info msg="CreateContainer within sandbox \"5e2cb850eb788e75e66489cdac759df5fa91a29f3b359431db2abf59adda8da8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2c5e344444d319af1262264366f0ff797def563f97c0e23d800039ade091eb98\"" Aug 5 22:44:48.140599 containerd[1460]: time="2024-08-05T22:44:48.139961310Z" level=info msg="StartContainer for \"2c5e344444d319af1262264366f0ff797def563f97c0e23d800039ade091eb98\"" Aug 5 22:44:48.163956 containerd[1460]: time="2024-08-05T22:44:48.163798644Z" level=info msg="CreateContainer within sandbox \"17122613a0fb07d97db138f50cdc117fb721e6bc3d3e096a5a7fc829e56115ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"00cf3725bd1ae0f306b8ef7e221f43fa565ed752dbf21e9934f698a8f2c9e879\"" Aug 5 22:44:48.166142 containerd[1460]: time="2024-08-05T22:44:48.166059498Z" level=info msg="StartContainer for \"00cf3725bd1ae0f306b8ef7e221f43fa565ed752dbf21e9934f698a8f2c9e879\"" Aug 5 22:44:48.183735 systemd[1]: Started cri-containerd-9e23dab63a46c8978e5e3978b3d570ff875cc680d4d89162f82b963261a85237.scope - libcontainer container 9e23dab63a46c8978e5e3978b3d570ff875cc680d4d89162f82b963261a85237. 
Aug 5 22:44:48.211784 kubelet[2242]: W0805 22:44:48.210184 2242 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:48.211784 kubelet[2242]: E0805 22:44:48.210524 2242 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.28:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.28:6443: connect: connection refused Aug 5 22:44:48.234799 systemd[1]: Started cri-containerd-2c5e344444d319af1262264366f0ff797def563f97c0e23d800039ade091eb98.scope - libcontainer container 2c5e344444d319af1262264366f0ff797def563f97c0e23d800039ade091eb98. Aug 5 22:44:48.255725 kubelet[2242]: I0805 22:44:48.255675 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:48.256186 kubelet[2242]: E0805 22:44:48.256139 2242 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.28:6443/api/v1/nodes\": dial tcp 10.128.0.28:6443: connect: connection refused" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:48.258771 systemd[1]: Started cri-containerd-00cf3725bd1ae0f306b8ef7e221f43fa565ed752dbf21e9934f698a8f2c9e879.scope - libcontainer container 00cf3725bd1ae0f306b8ef7e221f43fa565ed752dbf21e9934f698a8f2c9e879. Aug 5 22:44:48.312611 containerd[1460]: time="2024-08-05T22:44:48.312528747Z" level=info msg="StartContainer for \"9e23dab63a46c8978e5e3978b3d570ff875cc680d4d89162f82b963261a85237\" returns successfully" Aug 5 22:44:48.358073 containerd[1460]: time="2024-08-05T22:44:48.357790170Z" level=info msg="StartContainer for \"2c5e344444d319af1262264366f0ff797def563f97c0e23d800039ade091eb98\" returns successfully" Aug 5 22:44:48.423099 containerd[1460]: time="2024-08-05T22:44:48.422870838Z" level=info msg="StartContainer for \"00cf3725bd1ae0f306b8ef7e221f43fa565ed752dbf21e9934f698a8f2c9e879\" returns successfully" Aug 5 22:44:48.453656 kubelet[2242]: E0805 22:44:48.453488 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal.17e8f67fe860d02f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,UID:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,},FirstTimestamp:2024-08-05 22:44:46.689153071 +0000 UTC m=+0.760305048,LastTimestamp:2024-08-05 22:44:46.689153071 +0000 UTC m=+0.760305048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,}" Aug 5 22:44:49.862640 kubelet[2242]: I0805 22:44:49.861095 2242 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:52.163340 kubelet[2242]: E0805 
22:44:52.163269 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" not found" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:52.330332 kubelet[2242]: I0805 22:44:52.330278 2242 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:52.686013 kubelet[2242]: I0805 22:44:52.685677 2242 apiserver.go:52] "Watching apiserver" Aug 5 22:44:52.713002 kubelet[2242]: I0805 22:44:52.712950 2242 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Aug 5 22:44:53.138697 kubelet[2242]: W0805 22:44:53.138649 2242 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 5 22:44:53.730663 kubelet[2242]: W0805 22:44:53.730587 2242 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 5 22:44:53.889163 kubelet[2242]: W0805 22:44:53.888352 2242 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 5 22:44:54.132874 update_engine[1451]: I0805 22:44:54.132782 1451 update_attempter.cc:509] Updating boot flags... Aug 5 22:44:54.222983 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2523) Aug 5 22:44:54.363727 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2525) Aug 5 22:44:54.447105 systemd[1]: Reloading requested from client PID 2533 ('systemctl') (unit session-9.scope)... Aug 5 22:44:54.447130 systemd[1]: Reloading... Aug 5 22:44:54.549031 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2525) Aug 5 22:44:54.666512 zram_generator::config[2575]: No configuration found. Aug 5 22:44:54.819089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:44:54.948865 systemd[1]: Reloading finished in 500 ms. Aug 5 22:44:55.038584 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
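
The systemd reload and the kubelet stop both originate from the interactive session-9.scope, which suggests the install.sh run earlier is swapping in new kubelet unit/config files and bouncing the service. A plausible equivalent of what that session ran; this is an assumption, since the script contents are not in the log:

    # Reload unit files after dropping in new kubelet configuration, then restart the service
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
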
Aug 5 22:44:55.042545 kubelet[2242]: E0805 22:44:55.039626 2242 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal.17e8f67fe860d02f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,UID:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,},FirstTimestamp:2024-08-05 22:44:46.689153071 +0000 UTC m=+0.760305048,LastTimestamp:2024-08-05 22:44:46.689153071 +0000 UTC m=+0.760305048,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal,}" Aug 5 22:44:55.042545 kubelet[2242]: I0805 22:44:55.042017 2242 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:44:55.066569 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:44:55.066955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:55.067049 systemd[1]: kubelet.service: Consumed 1.343s CPU time, 116.0M memory peak, 0B memory swap peak. Aug 5 22:44:55.074952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:44:55.358851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:44:55.374783 (kubelet)[2622]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:44:55.468302 kubelet[2622]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:44:55.468302 kubelet[2622]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:44:55.468302 kubelet[2622]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:44:55.468909 kubelet[2622]: I0805 22:44:55.468403 2622 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:44:55.476825 kubelet[2622]: I0805 22:44:55.476762 2622 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Aug 5 22:44:55.476825 kubelet[2622]: I0805 22:44:55.476797 2622 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:44:55.483434 kubelet[2622]: I0805 22:44:55.479989 2622 server.go:927] "Client rotation is on, will bootstrap in background" Aug 5 22:44:55.486610 kubelet[2622]: I0805 22:44:55.486550 2622 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
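The certificate_store line above shows client rotation loading /var/lib/kubelet/pki/kubelet-client-current.pem, a combined certificate-and-key bundle. The following is a small sketch, assuming it runs on the node with permission to read that file, that prints the client certificate's validity window; this is handy when the kubelet is stuck retrying the API server as in the earlier connection-refused entries.

```go
// kubelet_cert_check.go - sketch (run on the node; path taken from the certificate_store
// log line above). kubelet-client-current.pem holds both the certificate and the key, so
// the same path is passed for both arguments.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
)

func main() {
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		log.Fatalf("load kubelet client pair: %v", err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		log.Fatalf("parse leaf certificate: %v", err)
	}
	fmt.Printf("subject=%s valid %s -> %s\n", leaf.Subject, leaf.NotBefore, leaf.NotAfter)
}
```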
Aug 5 22:44:55.488545 kubelet[2622]: I0805 22:44:55.488496 2622 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:44:55.499051 kubelet[2622]: I0805 22:44:55.498987 2622 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 22:44:55.500347 kubelet[2622]: I0805 22:44:55.499304 2622 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:44:55.500347 kubelet[2622]: I0805 22:44:55.499358 2622 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:44:55.500347 kubelet[2622]: I0805 22:44:55.499680 2622 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:44:55.500347 kubelet[2622]: I0805 22:44:55.499698 2622 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:44:55.500743 kubelet[2622]: I0805 22:44:55.499774 2622 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:44:55.500743 kubelet[2622]: I0805 22:44:55.499897 2622 kubelet.go:400] "Attempting to sync node with API server" Aug 5 22:44:55.500743 kubelet[2622]: I0805 22:44:55.499914 2622 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:44:55.500743 kubelet[2622]: I0805 22:44:55.499947 2622 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:44:55.500743 kubelet[2622]: I0805 22:44:55.499968 2622 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:44:55.505302 kubelet[2622]: I0805 22:44:55.501734 2622 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:44:55.505302 kubelet[2622]: I0805 22:44:55.502163 2622 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:44:55.505302 kubelet[2622]: I0805 22:44:55.502825 2622 server.go:1264] "Started kubelet" Aug 5 
22:44:55.505581 kubelet[2622]: I0805 22:44:55.505340 2622 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:44:55.514879 kubelet[2622]: I0805 22:44:55.514775 2622 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:44:55.518325 kubelet[2622]: I0805 22:44:55.518280 2622 server.go:455] "Adding debug handlers to kubelet server" Aug 5 22:44:55.522676 kubelet[2622]: E0805 22:44:55.522624 2622 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:44:55.523499 kubelet[2622]: I0805 22:44:55.523276 2622 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:44:55.523833 kubelet[2622]: I0805 22:44:55.523773 2622 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:44:55.524271 kubelet[2622]: I0805 22:44:55.524234 2622 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:44:55.530277 kubelet[2622]: I0805 22:44:55.530237 2622 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Aug 5 22:44:55.530527 kubelet[2622]: I0805 22:44:55.530507 2622 reconciler.go:26] "Reconciler: start to sync state" Aug 5 22:44:55.533856 kubelet[2622]: I0805 22:44:55.533824 2622 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:44:55.534173 kubelet[2622]: I0805 22:44:55.534134 2622 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:44:55.539292 kubelet[2622]: I0805 22:44:55.539263 2622 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:44:55.548653 kubelet[2622]: I0805 22:44:55.548609 2622 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:44:55.553996 kubelet[2622]: I0805 22:44:55.553949 2622 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:44:55.554199 kubelet[2622]: I0805 22:44:55.554187 2622 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:44:55.554383 kubelet[2622]: I0805 22:44:55.554372 2622 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:44:55.554685 kubelet[2622]: E0805 22:44:55.554649 2622 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:44:55.634437 kubelet[2622]: I0805 22:44:55.634321 2622 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.650634 kubelet[2622]: I0805 22:44:55.650520 2622 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.650634 kubelet[2622]: I0805 22:44:55.650639 2622 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.655621 kubelet[2622]: E0805 22:44:55.655538 2622 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:44:55.693549 kubelet[2622]: I0805 22:44:55.692254 2622 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:44:55.693549 kubelet[2622]: I0805 22:44:55.692289 2622 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:44:55.693549 kubelet[2622]: I0805 22:44:55.692315 2622 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:44:55.693549 kubelet[2622]: I0805 22:44:55.692781 2622 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:44:55.693549 kubelet[2622]: I0805 22:44:55.692800 2622 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:44:55.693549 kubelet[2622]: I0805 22:44:55.692857 2622 policy_none.go:49] "None policy: Start" Aug 5 22:44:55.695674 kubelet[2622]: I0805 22:44:55.694366 2622 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:44:55.695674 kubelet[2622]: I0805 22:44:55.694401 2622 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:44:55.695674 kubelet[2622]: I0805 22:44:55.694737 2622 state_mem.go:75] "Updated machine memory state" Aug 5 22:44:55.707047 kubelet[2622]: I0805 22:44:55.705827 2622 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:44:55.707047 kubelet[2622]: I0805 22:44:55.706065 2622 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 5 22:44:55.707047 kubelet[2622]: I0805 22:44:55.706800 2622 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:44:55.857909 kubelet[2622]: I0805 22:44:55.856379 2622 topology_manager.go:215] "Topology Admit Handler" podUID="f61112cc80d993107048f9800d4c6284" podNamespace="kube-system" podName="kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.857909 kubelet[2622]: I0805 22:44:55.856557 2622 topology_manager.go:215] "Topology Admit Handler" podUID="014a586b24bb95bd5307cac345e3de2c" podNamespace="kube-system" podName="kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.857909 kubelet[2622]: I0805 22:44:55.856641 2622 topology_manager.go:215] "Topology Admit Handler" podUID="b84aebcad54618cfeae6ba3076718644" podNamespace="kube-system" 
podName="kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.863597 kubelet[2622]: W0805 22:44:55.863303 2622 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 5 22:44:55.863597 kubelet[2622]: E0805 22:44:55.863442 2622 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.866486 kubelet[2622]: W0805 22:44:55.866407 2622 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 5 22:44:55.866687 kubelet[2622]: E0805 22:44:55.866556 2622 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.867816 kubelet[2622]: W0805 22:44:55.867780 2622 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 5 22:44:55.867979 kubelet[2622]: E0805 22:44:55.867866 2622 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.933592 kubelet[2622]: I0805 22:44:55.933283 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b84aebcad54618cfeae6ba3076718644-kubeconfig\") pod \"kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"b84aebcad54618cfeae6ba3076718644\") " pod="kube-system/kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.933592 kubelet[2622]: I0805 22:44:55.933346 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f61112cc80d993107048f9800d4c6284-ca-certs\") pod \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"f61112cc80d993107048f9800d4c6284\") " pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.933592 kubelet[2622]: I0805 22:44:55.933386 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f61112cc80d993107048f9800d4c6284-k8s-certs\") pod \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"f61112cc80d993107048f9800d4c6284\") " pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.933592 kubelet[2622]: I0805 22:44:55.933419 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-ca-certs\") pod 
\"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.934382 kubelet[2622]: I0805 22:44:55.933452 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-kubeconfig\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.934382 kubelet[2622]: I0805 22:44:55.934146 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.934382 kubelet[2622]: I0805 22:44:55.934254 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f61112cc80d993107048f9800d4c6284-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"f61112cc80d993107048f9800d4c6284\") " pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.934382 kubelet[2622]: I0805 22:44:55.934328 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-flexvolume-dir\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:55.934798 kubelet[2622]: I0805 22:44:55.934504 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/014a586b24bb95bd5307cac345e3de2c-k8s-certs\") pod \"kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" (UID: \"014a586b24bb95bd5307cac345e3de2c\") " pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:56.518384 kubelet[2622]: I0805 22:44:56.518335 2622 apiserver.go:52] "Watching apiserver" Aug 5 22:44:56.530982 kubelet[2622]: I0805 22:44:56.530818 2622 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Aug 5 22:44:56.658412 kubelet[2622]: W0805 22:44:56.658364 2622 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Aug 5 22:44:56.658640 kubelet[2622]: E0805 22:44:56.658514 2622 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" already exists" 
pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:44:56.753526 kubelet[2622]: I0805 22:44:56.753429 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" podStartSLOduration=3.753384344 podStartE2EDuration="3.753384344s" podCreationTimestamp="2024-08-05 22:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:44:56.739387344 +0000 UTC m=+1.357389592" watchObservedRunningTime="2024-08-05 22:44:56.753384344 +0000 UTC m=+1.371386588" Aug 5 22:44:56.816736 kubelet[2622]: I0805 22:44:56.816551 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" podStartSLOduration=3.816525999 podStartE2EDuration="3.816525999s" podCreationTimestamp="2024-08-05 22:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:44:56.784915672 +0000 UTC m=+1.402917916" watchObservedRunningTime="2024-08-05 22:44:56.816525999 +0000 UTC m=+1.434528237" Aug 5 22:45:00.153705 kubelet[2622]: I0805 22:45:00.153474 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" podStartSLOduration=7.153436912 podStartE2EDuration="7.153436912s" podCreationTimestamp="2024-08-05 22:44:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:44:56.8187097 +0000 UTC m=+1.436711946" watchObservedRunningTime="2024-08-05 22:45:00.153436912 +0000 UTC m=+4.771439275" Aug 5 22:45:01.072748 sudo[1738]: pam_unix(sudo:session): session closed for user root Aug 5 22:45:01.116500 sshd[1735]: pam_unix(sshd:session): session closed for user core Aug 5 22:45:01.123090 systemd[1]: sshd@8-10.128.0.28:22-139.178.68.195:41980.service: Deactivated successfully. Aug 5 22:45:01.126525 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:45:01.126809 systemd[1]: session-9.scope: Consumed 6.618s CPU time, 141.6M memory peak, 0B memory swap peak. Aug 5 22:45:01.127842 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:45:01.129594 systemd-logind[1449]: Removed session 9. Aug 5 22:45:08.535282 kubelet[2622]: I0805 22:45:08.535207 2622 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:45:08.536212 kubelet[2622]: I0805 22:45:08.535985 2622 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:45:08.536356 containerd[1460]: time="2024-08-05T22:45:08.535704589Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 22:45:09.521597 kubelet[2622]: I0805 22:45:09.521542 2622 topology_manager.go:215] "Topology Admit Handler" podUID="2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf" podNamespace="kube-system" podName="kube-proxy-62vnl" Aug 5 22:45:09.538988 systemd[1]: Created slice kubepods-besteffort-pod2fc03fe8_ebe6_431d_aa7c_88f457d2fcaf.slice - libcontainer container kubepods-besteffort-pod2fc03fe8_ebe6_431d_aa7c_88f457d2fcaf.slice. 
Aug 5 22:45:09.627831 kubelet[2622]: I0805 22:45:09.627691 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf-kube-proxy\") pod \"kube-proxy-62vnl\" (UID: \"2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf\") " pod="kube-system/kube-proxy-62vnl" Aug 5 22:45:09.627831 kubelet[2622]: I0805 22:45:09.627749 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf-xtables-lock\") pod \"kube-proxy-62vnl\" (UID: \"2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf\") " pod="kube-system/kube-proxy-62vnl" Aug 5 22:45:09.627831 kubelet[2622]: I0805 22:45:09.627790 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf-lib-modules\") pod \"kube-proxy-62vnl\" (UID: \"2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf\") " pod="kube-system/kube-proxy-62vnl" Aug 5 22:45:09.627831 kubelet[2622]: I0805 22:45:09.627824 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbxb\" (UniqueName: \"kubernetes.io/projected/2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf-kube-api-access-zxbxb\") pod \"kube-proxy-62vnl\" (UID: \"2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf\") " pod="kube-system/kube-proxy-62vnl" Aug 5 22:45:09.658520 kubelet[2622]: I0805 22:45:09.658442 2622 topology_manager.go:215] "Topology Admit Handler" podUID="86bc38a1-ed2b-475c-97d2-5656e6c682a9" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-ffwlb" Aug 5 22:45:09.674035 systemd[1]: Created slice kubepods-besteffort-pod86bc38a1_ed2b_475c_97d2_5656e6c682a9.slice - libcontainer container kubepods-besteffort-pod86bc38a1_ed2b_475c_97d2_5656e6c682a9.slice. Aug 5 22:45:09.729509 kubelet[2622]: I0805 22:45:09.729060 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94ksh\" (UniqueName: \"kubernetes.io/projected/86bc38a1-ed2b-475c-97d2-5656e6c682a9-kube-api-access-94ksh\") pod \"tigera-operator-76ff79f7fd-ffwlb\" (UID: \"86bc38a1-ed2b-475c-97d2-5656e6c682a9\") " pod="tigera-operator/tigera-operator-76ff79f7fd-ffwlb" Aug 5 22:45:09.729509 kubelet[2622]: I0805 22:45:09.729136 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86bc38a1-ed2b-475c-97d2-5656e6c682a9-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-ffwlb\" (UID: \"86bc38a1-ed2b-475c-97d2-5656e6c682a9\") " pod="tigera-operator/tigera-operator-76ff79f7fd-ffwlb" Aug 5 22:45:09.852096 containerd[1460]: time="2024-08-05T22:45:09.851745220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62vnl,Uid:2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf,Namespace:kube-system,Attempt:0,}" Aug 5 22:45:09.893560 containerd[1460]: time="2024-08-05T22:45:09.893354752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:09.893560 containerd[1460]: time="2024-08-05T22:45:09.893441609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:09.893560 containerd[1460]: time="2024-08-05T22:45:09.893483542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:09.893560 containerd[1460]: time="2024-08-05T22:45:09.893511249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:09.931815 systemd[1]: Started cri-containerd-9653e8e9c8dab5771e258560846cb10c9667130d798dad8628459a199d4620e0.scope - libcontainer container 9653e8e9c8dab5771e258560846cb10c9667130d798dad8628459a199d4620e0. Aug 5 22:45:09.972074 containerd[1460]: time="2024-08-05T22:45:09.971909728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62vnl,Uid:2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9653e8e9c8dab5771e258560846cb10c9667130d798dad8628459a199d4620e0\"" Aug 5 22:45:09.978558 containerd[1460]: time="2024-08-05T22:45:09.977408096Z" level=info msg="CreateContainer within sandbox \"9653e8e9c8dab5771e258560846cb10c9667130d798dad8628459a199d4620e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:45:09.982642 containerd[1460]: time="2024-08-05T22:45:09.982096301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-ffwlb,Uid:86bc38a1-ed2b-475c-97d2-5656e6c682a9,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:45:10.021381 containerd[1460]: time="2024-08-05T22:45:10.021083056Z" level=info msg="CreateContainer within sandbox \"9653e8e9c8dab5771e258560846cb10c9667130d798dad8628459a199d4620e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a850e999f4196d7f3abac185840969108ba7a3525fb9cdebf8546e8387be3dc8\"" Aug 5 22:45:10.023489 containerd[1460]: time="2024-08-05T22:45:10.022425608Z" level=info msg="StartContainer for \"a850e999f4196d7f3abac185840969108ba7a3525fb9cdebf8546e8387be3dc8\"" Aug 5 22:45:10.047800 containerd[1460]: time="2024-08-05T22:45:10.047683864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:10.049785 containerd[1460]: time="2024-08-05T22:45:10.049694769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:10.049965 containerd[1460]: time="2024-08-05T22:45:10.049823818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:10.049965 containerd[1460]: time="2024-08-05T22:45:10.049892755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:10.084774 systemd[1]: Started cri-containerd-9c9928bb16b9b7df83853ea40411ec0977cabc9f3d114bfd1b011e21931d53a8.scope - libcontainer container 9c9928bb16b9b7df83853ea40411ec0977cabc9f3d114bfd1b011e21931d53a8. Aug 5 22:45:10.086948 systemd[1]: Started cri-containerd-a850e999f4196d7f3abac185840969108ba7a3525fb9cdebf8546e8387be3dc8.scope - libcontainer container a850e999f4196d7f3abac185840969108ba7a3525fb9cdebf8546e8387be3dc8. 
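The systemd entries above, together with the earlier kube-proxy slice creation, show the cgroup layout requested by the kubelet's systemd cgroup driver (CgroupDriver "systemd" in the NodeConfig dump): one kubepods-besteffort-pod<uid>.slice per pod and a cri-containerd-<container-id>.scope per container. The sketch below rebuilds those unit names from the pod UID and container ID logged for kube-proxy-62vnl; the helper functions are illustrative, not kubelet API.

```go
// cgroup_names.go - illustrative sketch: derive the systemd unit names seen in the log
// from the pod UID and container ID of kube-proxy-62vnl. The helpers are made up here;
// the naming convention itself is what the "Created slice"/"Started" entries show.
package main

import (
	"fmt"
	"strings"
)

// besteffortPodSlice mirrors kubepods-besteffort-pod<uid>.slice, with dashes in the
// pod UID replaced by underscores, as in the "Created slice" entry above.
func besteffortPodSlice(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

// containerScope mirrors the cri-containerd-<id>.scope transient unit started by systemd.
func containerScope(containerID string) string {
	return "cri-containerd-" + containerID + ".scope"
}

func main() {
	fmt.Println(besteffortPodSlice("2fc03fe8-ebe6-431d-aa7c-88f457d2fcaf"))
	fmt.Println(containerScope("a850e999f4196d7f3abac185840969108ba7a3525fb9cdebf8546e8387be3dc8"))
}
```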
Aug 5 22:45:10.141578 containerd[1460]: time="2024-08-05T22:45:10.141515781Z" level=info msg="StartContainer for \"a850e999f4196d7f3abac185840969108ba7a3525fb9cdebf8546e8387be3dc8\" returns successfully" Aug 5 22:45:10.179840 containerd[1460]: time="2024-08-05T22:45:10.179788756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-ffwlb,Uid:86bc38a1-ed2b-475c-97d2-5656e6c682a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9c9928bb16b9b7df83853ea40411ec0977cabc9f3d114bfd1b011e21931d53a8\"" Aug 5 22:45:10.183395 containerd[1460]: time="2024-08-05T22:45:10.183205538Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:45:11.434555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3279126493.mount: Deactivated successfully. Aug 5 22:45:12.229074 containerd[1460]: time="2024-08-05T22:45:12.229004102Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:12.231330 containerd[1460]: time="2024-08-05T22:45:12.231259510Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Aug 5 22:45:12.233499 containerd[1460]: time="2024-08-05T22:45:12.232218432Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:12.245557 containerd[1460]: time="2024-08-05T22:45:12.245454335Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:12.250699 containerd[1460]: time="2024-08-05T22:45:12.250606934Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.065671889s" Aug 5 22:45:12.250699 containerd[1460]: time="2024-08-05T22:45:12.250670392Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Aug 5 22:45:12.254090 containerd[1460]: time="2024-08-05T22:45:12.254019561Z" level=info msg="CreateContainer within sandbox \"9c9928bb16b9b7df83853ea40411ec0977cabc9f3d114bfd1b011e21931d53a8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:45:12.282006 containerd[1460]: time="2024-08-05T22:45:12.281939469Z" level=info msg="CreateContainer within sandbox \"9c9928bb16b9b7df83853ea40411ec0977cabc9f3d114bfd1b011e21931d53a8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"104db00bc7d9f0b20e1003ad743e8357b91574a6d0bc27401dcb1db732d21f95\"" Aug 5 22:45:12.283101 containerd[1460]: time="2024-08-05T22:45:12.283025947Z" level=info msg="StartContainer for \"104db00bc7d9f0b20e1003ad743e8357b91574a6d0bc27401dcb1db732d21f95\"" Aug 5 22:45:12.332787 systemd[1]: Started cri-containerd-104db00bc7d9f0b20e1003ad743e8357b91574a6d0bc27401dcb1db732d21f95.scope - libcontainer container 104db00bc7d9f0b20e1003ad743e8357b91574a6d0bc27401dcb1db732d21f95. 
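The containerd entries above report the tigera-operator pull with both a repo tag and a repo digest, plus the byte count and pull duration. The sketch below uses only those logged values to split the reference and estimate average pull throughput.

```go
// pull_stats.go - sketch using only values printed in the containerd entries above:
// separate the pulled reference into its tag and digest forms and estimate average
// pull throughput from the "bytes read" count and the reported pull duration.
package main

import (
	"fmt"
	"strings"
	"time"
)

func main() {
	repoTag := "quay.io/tigera/operator:v1.34.0"
	repoDigest := "quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5"

	name := repoTag[:strings.LastIndex(repoTag, ":")]
	digest := repoDigest[strings.Index(repoDigest, "@")+1:]
	fmt.Printf("image %s pinned by digest %s\n", name, digest)

	bytesRead := 22076068.0                      // "bytes read" from the stop-pulling entry
	dur, err := time.ParseDuration("2.065671889s") // duration from the "Pulled image" entry
	if err != nil {
		panic(err)
	}
	fmt.Printf("average pull throughput ~%.1f MiB/s\n", bytesRead/dur.Seconds()/(1<<20))
}
```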
Aug 5 22:45:12.379794 containerd[1460]: time="2024-08-05T22:45:12.379577279Z" level=info msg="StartContainer for \"104db00bc7d9f0b20e1003ad743e8357b91574a6d0bc27401dcb1db732d21f95\" returns successfully" Aug 5 22:45:12.677027 kubelet[2622]: I0805 22:45:12.676161 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-62vnl" podStartSLOduration=3.67613891 podStartE2EDuration="3.67613891s" podCreationTimestamp="2024-08-05 22:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:45:10.671352975 +0000 UTC m=+15.289355197" watchObservedRunningTime="2024-08-05 22:45:12.67613891 +0000 UTC m=+17.294141170" Aug 5 22:45:15.519883 kubelet[2622]: I0805 22:45:15.519797 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-ffwlb" podStartSLOduration=4.450472961 podStartE2EDuration="6.519769302s" podCreationTimestamp="2024-08-05 22:45:09 +0000 UTC" firstStartedPulling="2024-08-05 22:45:10.182615453 +0000 UTC m=+14.800617682" lastFinishedPulling="2024-08-05 22:45:12.251911788 +0000 UTC m=+16.869914023" observedRunningTime="2024-08-05 22:45:12.67651217 +0000 UTC m=+17.294514414" watchObservedRunningTime="2024-08-05 22:45:15.519769302 +0000 UTC m=+20.137771552" Aug 5 22:45:15.520593 kubelet[2622]: I0805 22:45:15.520101 2622 topology_manager.go:215] "Topology Admit Handler" podUID="3b406d8f-f2dd-434f-8811-220e268c9174" podNamespace="calico-system" podName="calico-typha-647b7fd5c6-qz87m" Aug 5 22:45:15.536918 systemd[1]: Created slice kubepods-besteffort-pod3b406d8f_f2dd_434f_8811_220e268c9174.slice - libcontainer container kubepods-besteffort-pod3b406d8f_f2dd_434f_8811_220e268c9174.slice. Aug 5 22:45:15.570546 kubelet[2622]: I0805 22:45:15.567788 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b406d8f-f2dd-434f-8811-220e268c9174-tigera-ca-bundle\") pod \"calico-typha-647b7fd5c6-qz87m\" (UID: \"3b406d8f-f2dd-434f-8811-220e268c9174\") " pod="calico-system/calico-typha-647b7fd5c6-qz87m" Aug 5 22:45:15.570546 kubelet[2622]: I0805 22:45:15.568679 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3b406d8f-f2dd-434f-8811-220e268c9174-typha-certs\") pod \"calico-typha-647b7fd5c6-qz87m\" (UID: \"3b406d8f-f2dd-434f-8811-220e268c9174\") " pod="calico-system/calico-typha-647b7fd5c6-qz87m" Aug 5 22:45:15.570546 kubelet[2622]: I0805 22:45:15.568749 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6j5v\" (UniqueName: \"kubernetes.io/projected/3b406d8f-f2dd-434f-8811-220e268c9174-kube-api-access-q6j5v\") pod \"calico-typha-647b7fd5c6-qz87m\" (UID: \"3b406d8f-f2dd-434f-8811-220e268c9174\") " pod="calico-system/calico-typha-647b7fd5c6-qz87m" Aug 5 22:45:15.659119 kubelet[2622]: I0805 22:45:15.659055 2622 topology_manager.go:215] "Topology Admit Handler" podUID="13ab8689-c863-4786-a528-806d2e67ea2c" podNamespace="calico-system" podName="calico-node-mvmws" Aug 5 22:45:15.672386 systemd[1]: Created slice kubepods-besteffort-pod13ab8689_c863_4786_a528_806d2e67ea2c.slice - libcontainer container kubepods-besteffort-pod13ab8689_c863_4786_a528_806d2e67ea2c.slice. 
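The pod_startup_latency_tracker entry above for tigera-operator-76ff79f7fd-ffwlb includes firstStartedPulling and lastFinishedPulling in Go's default time format (with a monotonic "m=+..." suffix). A sketch that parses those two logged values and computes the pull interval, which comes out close to the 2.065671889s containerd reported:

```go
// pull_interval.go - sketch: parse the firstStartedPulling / lastFinishedPulling values
// from the pod_startup_latency_tracker entry above (Go's default time format; the
// monotonic "m=+..." suffix is stripped) and compute the image-pull interval.
package main

import (
	"fmt"
	"log"
	"strings"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parseLogged(v string) time.Time {
	if i := strings.Index(v, " m="); i >= 0 {
		v = v[:i] // drop the monotonic clock reading appended by time.Time.String()
	}
	t, err := time.Parse(layout, v)
	if err != nil {
		log.Fatal(err)
	}
	return t
}

func main() {
	started := parseLogged("2024-08-05 22:45:10.182615453 +0000 UTC m=+14.800617682")
	finished := parseLogged("2024-08-05 22:45:12.251911788 +0000 UTC m=+16.869914023")
	fmt.Println("pull interval:", finished.Sub(started)) // ≈2.07s, matching the containerd pull time
}
```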
Aug 5 22:45:15.772898 kubelet[2622]: I0805 22:45:15.770275 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-run-calico\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.772898 kubelet[2622]: I0805 22:45:15.770331 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-lib-calico\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.772898 kubelet[2622]: I0805 22:45:15.770367 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-policysync\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.772898 kubelet[2622]: I0805 22:45:15.770393 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-net-dir\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.772898 kubelet[2622]: I0805 22:45:15.770422 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-xtables-lock\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.773269 kubelet[2622]: I0805 22:45:15.770526 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13ab8689-c863-4786-a528-806d2e67ea2c-node-certs\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.773269 kubelet[2622]: I0805 22:45:15.770560 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-lib-modules\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.773269 kubelet[2622]: I0805 22:45:15.770589 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ab8689-c863-4786-a528-806d2e67ea2c-tigera-ca-bundle\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.773269 kubelet[2622]: I0805 22:45:15.770632 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-bin-dir\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.773269 kubelet[2622]: I0805 22:45:15.770657 2622 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-log-dir\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.774653 kubelet[2622]: I0805 22:45:15.770683 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-flexvol-driver-host\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.774653 kubelet[2622]: I0805 22:45:15.770713 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2b5p\" (UniqueName: \"kubernetes.io/projected/13ab8689-c863-4786-a528-806d2e67ea2c-kube-api-access-t2b5p\") pod \"calico-node-mvmws\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " pod="calico-system/calico-node-mvmws" Aug 5 22:45:15.774653 kubelet[2622]: I0805 22:45:15.770794 2622 topology_manager.go:215] "Topology Admit Handler" podUID="385bee24-def7-4848-aef0-c366a7421715" podNamespace="calico-system" podName="csi-node-driver-f56wl" Aug 5 22:45:15.774653 kubelet[2622]: E0805 22:45:15.771202 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:15.845721 containerd[1460]: time="2024-08-05T22:45:15.845621510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647b7fd5c6-qz87m,Uid:3b406d8f-f2dd-434f-8811-220e268c9174,Namespace:calico-system,Attempt:0,}" Aug 5 22:45:15.874505 kubelet[2622]: I0805 22:45:15.871766 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/385bee24-def7-4848-aef0-c366a7421715-registration-dir\") pod \"csi-node-driver-f56wl\" (UID: \"385bee24-def7-4848-aef0-c366a7421715\") " pod="calico-system/csi-node-driver-f56wl" Aug 5 22:45:15.874505 kubelet[2622]: I0805 22:45:15.871820 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/385bee24-def7-4848-aef0-c366a7421715-kubelet-dir\") pod \"csi-node-driver-f56wl\" (UID: \"385bee24-def7-4848-aef0-c366a7421715\") " pod="calico-system/csi-node-driver-f56wl" Aug 5 22:45:15.874505 kubelet[2622]: I0805 22:45:15.871878 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/385bee24-def7-4848-aef0-c366a7421715-varrun\") pod \"csi-node-driver-f56wl\" (UID: \"385bee24-def7-4848-aef0-c366a7421715\") " pod="calico-system/csi-node-driver-f56wl" Aug 5 22:45:15.874505 kubelet[2622]: I0805 22:45:15.871915 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlnpm\" (UniqueName: \"kubernetes.io/projected/385bee24-def7-4848-aef0-c366a7421715-kube-api-access-vlnpm\") pod \"csi-node-driver-f56wl\" (UID: \"385bee24-def7-4848-aef0-c366a7421715\") " pod="calico-system/csi-node-driver-f56wl" Aug 5 22:45:15.874505 kubelet[2622]: I0805 
22:45:15.872088 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/385bee24-def7-4848-aef0-c366a7421715-socket-dir\") pod \"csi-node-driver-f56wl\" (UID: \"385bee24-def7-4848-aef0-c366a7421715\") " pod="calico-system/csi-node-driver-f56wl" Aug 5 22:45:15.881790 kubelet[2622]: E0805 22:45:15.881549 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.881790 kubelet[2622]: W0805 22:45:15.881587 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.881790 kubelet[2622]: E0805 22:45:15.881636 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.884501 kubelet[2622]: E0805 22:45:15.884132 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.884501 kubelet[2622]: W0805 22:45:15.884160 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.884501 kubelet[2622]: E0805 22:45:15.884379 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.889663 kubelet[2622]: E0805 22:45:15.889614 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.889663 kubelet[2622]: W0805 22:45:15.889649 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.889663 kubelet[2622]: E0805 22:45:15.889688 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:15.900150 kubelet[2622]: E0805 22:45:15.897017 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.900150 kubelet[2622]: W0805 22:45:15.897045 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.901074 kubelet[2622]: E0805 22:45:15.901033 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.901074 kubelet[2622]: W0805 22:45:15.901067 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.902869 kubelet[2622]: E0805 22:45:15.902657 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.902869 kubelet[2622]: W0805 22:45:15.902683 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.908027 kubelet[2622]: E0805 22:45:15.903433 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.908027 kubelet[2622]: W0805 22:45:15.903454 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.908027 kubelet[2622]: E0805 22:45:15.904276 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.908027 kubelet[2622]: W0805 22:45:15.904292 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.908027 kubelet[2622]: E0805 22:45:15.906524 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.908027 kubelet[2622]: W0805 22:45:15.906542 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.908027 kubelet[2622]: E0805 22:45:15.907606 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.908511 kubelet[2622]: W0805 22:45:15.907624 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.908597 kubelet[2622]: E0805 22:45:15.908533 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:15.908597 kubelet[2622]: E0805 22:45:15.908587 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.908697 kubelet[2622]: E0805 22:45:15.908609 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.908697 kubelet[2622]: E0805 22:45:15.908634 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.910986 kubelet[2622]: E0805 22:45:15.910264 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.910986 kubelet[2622]: W0805 22:45:15.910287 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.910986 kubelet[2622]: E0805 22:45:15.910425 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.911216 kubelet[2622]: E0805 22:45:15.910993 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.911216 kubelet[2622]: W0805 22:45:15.911010 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.911216 kubelet[2622]: E0805 22:45:15.911036 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.913062 kubelet[2622]: E0805 22:45:15.912633 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.913062 kubelet[2622]: W0805 22:45:15.912763 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.913062 kubelet[2622]: E0805 22:45:15.912783 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.913281 kubelet[2622]: E0805 22:45:15.913195 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:15.920627 kubelet[2622]: E0805 22:45:15.920590 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.920627 kubelet[2622]: W0805 22:45:15.920624 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.920873 kubelet[2622]: E0805 22:45:15.920654 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.925652 kubelet[2622]: E0805 22:45:15.921750 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.925652 kubelet[2622]: E0805 22:45:15.921783 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.925652 kubelet[2622]: E0805 22:45:15.923399 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.925652 kubelet[2622]: W0805 22:45:15.923418 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.925652 kubelet[2622]: E0805 22:45:15.924520 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.925652 kubelet[2622]: W0805 22:45:15.924539 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.925652 kubelet[2622]: E0805 22:45:15.924563 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.925652 kubelet[2622]: E0805 22:45:15.924919 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.925652 kubelet[2622]: W0805 22:45:15.924933 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.925652 kubelet[2622]: E0805 22:45:15.924951 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.926247 kubelet[2622]: E0805 22:45:15.925343 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:15.929755 kubelet[2622]: E0805 22:45:15.927786 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.929755 kubelet[2622]: W0805 22:45:15.927808 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.929755 kubelet[2622]: E0805 22:45:15.927878 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.931145 kubelet[2622]: E0805 22:45:15.930664 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.931145 kubelet[2622]: W0805 22:45:15.930691 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.931145 kubelet[2622]: E0805 22:45:15.930720 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.933148 kubelet[2622]: E0805 22:45:15.932623 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.933148 kubelet[2622]: W0805 22:45:15.932667 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.933148 kubelet[2622]: E0805 22:45:15.932690 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.933148 kubelet[2622]: E0805 22:45:15.933116 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.934100 kubelet[2622]: W0805 22:45:15.933750 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.934100 kubelet[2622]: E0805 22:45:15.933781 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.942631 containerd[1460]: time="2024-08-05T22:45:15.941745537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:15.942631 containerd[1460]: time="2024-08-05T22:45:15.941836655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:15.942631 containerd[1460]: time="2024-08-05T22:45:15.941871910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:15.942631 containerd[1460]: time="2024-08-05T22:45:15.941913890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:15.972843 kubelet[2622]: E0805 22:45:15.972798 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.972843 kubelet[2622]: W0805 22:45:15.972829 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.973359 kubelet[2622]: E0805 22:45:15.972858 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.975985 kubelet[2622]: E0805 22:45:15.975726 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.975985 kubelet[2622]: W0805 22:45:15.975754 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.975985 kubelet[2622]: E0805 22:45:15.975784 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.977373 kubelet[2622]: E0805 22:45:15.976992 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.977373 kubelet[2622]: W0805 22:45:15.977015 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.977373 kubelet[2622]: E0805 22:45:15.977222 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.980572 kubelet[2622]: E0805 22:45:15.979842 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.980572 kubelet[2622]: W0805 22:45:15.980247 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.980572 kubelet[2622]: E0805 22:45:15.980407 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.980619 systemd[1]: Started cri-containerd-949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316.scope - libcontainer container 949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316. 
Aug 5 22:45:15.981680 kubelet[2622]: E0805 22:45:15.981259 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.981680 kubelet[2622]: W0805 22:45:15.981278 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.981680 kubelet[2622]: E0805 22:45:15.981335 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.983268 kubelet[2622]: E0805 22:45:15.982859 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.983268 kubelet[2622]: W0805 22:45:15.982880 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.983268 kubelet[2622]: E0805 22:45:15.982949 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.984378 kubelet[2622]: E0805 22:45:15.984107 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.984378 kubelet[2622]: W0805 22:45:15.984153 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.984378 kubelet[2622]: E0805 22:45:15.984221 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.984840 kubelet[2622]: E0805 22:45:15.984692 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.984840 kubelet[2622]: W0805 22:45:15.984709 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.985188 kubelet[2622]: E0805 22:45:15.985055 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.985614 kubelet[2622]: E0805 22:45:15.985584 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.985833 kubelet[2622]: W0805 22:45:15.985724 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.985833 kubelet[2622]: E0805 22:45:15.985800 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:15.987012 kubelet[2622]: E0805 22:45:15.986403 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.987012 kubelet[2622]: W0805 22:45:15.986438 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.987012 kubelet[2622]: E0805 22:45:15.986498 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.987601 containerd[1460]: time="2024-08-05T22:45:15.987560053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvmws,Uid:13ab8689-c863-4786-a528-806d2e67ea2c,Namespace:calico-system,Attempt:0,}" Aug 5 22:45:15.989248 kubelet[2622]: E0805 22:45:15.988951 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.989248 kubelet[2622]: W0805 22:45:15.988972 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.989772 kubelet[2622]: E0805 22:45:15.989554 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.989772 kubelet[2622]: W0805 22:45:15.989571 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.990597 kubelet[2622]: E0805 22:45:15.990093 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.990597 kubelet[2622]: W0805 22:45:15.990111 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.991122 kubelet[2622]: E0805 22:45:15.990892 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.991122 kubelet[2622]: E0805 22:45:15.990942 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.991122 kubelet[2622]: E0805 22:45:15.990956 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:15.991557 kubelet[2622]: E0805 22:45:15.991396 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.991557 kubelet[2622]: W0805 22:45:15.991422 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.991557 kubelet[2622]: E0805 22:45:15.991520 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.992287 kubelet[2622]: E0805 22:45:15.992271 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.992475 kubelet[2622]: W0805 22:45:15.992385 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.992722 kubelet[2622]: E0805 22:45:15.992587 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.993648 kubelet[2622]: E0805 22:45:15.993525 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.993944 kubelet[2622]: W0805 22:45:15.993545 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.994171 kubelet[2622]: E0805 22:45:15.994054 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.995972 kubelet[2622]: E0805 22:45:15.995830 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.995972 kubelet[2622]: W0805 22:45:15.995857 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.997171 kubelet[2622]: E0805 22:45:15.996851 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.997171 kubelet[2622]: E0805 22:45:15.997028 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.997171 kubelet[2622]: W0805 22:45:15.997041 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.997820 kubelet[2622]: E0805 22:45:15.997711 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:15.998017 kubelet[2622]: E0805 22:45:15.997970 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.998017 kubelet[2622]: W0805 22:45:15.997987 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.998388 kubelet[2622]: E0805 22:45:15.998237 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.999255 kubelet[2622]: E0805 22:45:15.998922 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.999255 kubelet[2622]: W0805 22:45:15.998956 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:15.999457 kubelet[2622]: E0805 22:45:15.999422 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:15.999688 kubelet[2622]: E0805 22:45:15.999672 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:15.999962 kubelet[2622]: W0805 22:45:15.999795 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:16.000229 kubelet[2622]: E0805 22:45:16.000177 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:16.002257 kubelet[2622]: E0805 22:45:16.002235 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:16.002723 kubelet[2622]: W0805 22:45:16.002454 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:16.003028 kubelet[2622]: E0805 22:45:16.002876 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:16.005203 kubelet[2622]: E0805 22:45:16.005180 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:16.005607 kubelet[2622]: W0805 22:45:16.005579 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:16.005960 kubelet[2622]: E0805 22:45:16.005935 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:45:16.007455 kubelet[2622]: E0805 22:45:16.007434 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:16.007707 kubelet[2622]: W0805 22:45:16.007639 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:16.007847 kubelet[2622]: E0805 22:45:16.007830 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:16.008861 kubelet[2622]: E0805 22:45:16.008841 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:16.009042 kubelet[2622]: W0805 22:45:16.009024 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:16.009191 kubelet[2622]: E0805 22:45:16.009173 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:16.036690 kubelet[2622]: E0805 22:45:16.036524 2622 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:45:16.036690 kubelet[2622]: W0805 22:45:16.036555 2622 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:45:16.036690 kubelet[2622]: E0805 22:45:16.036583 2622 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:45:16.073317 containerd[1460]: time="2024-08-05T22:45:16.073100017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:16.073317 containerd[1460]: time="2024-08-05T22:45:16.073196677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:16.073317 containerd[1460]: time="2024-08-05T22:45:16.073227927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:16.073317 containerd[1460]: time="2024-08-05T22:45:16.073252494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:16.124779 systemd[1]: Started cri-containerd-2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088.scope - libcontainer container 2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088. 
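The repeated driver-call failures above come from kubelet's FlexVolume prober: it walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, and runs its uds binary with the init argument, expecting a JSON status object on stdout. The binary is not installed yet (Calico's flexvol-driver container, started further below, is what normally populates that host path), so the call produces no output and decoding the empty string fails. A minimal Go sketch of just that decoding step, with an illustrative stand-in struct rather than kubelet's actual type:

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus is an illustrative stand-in for the reply a FlexVolume
// driver's "init" call is expected to print as JSON on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	output := "" // what the probe sees when .../nodeagent~uds/uds does not exist
	var st driverStatus
	if err := json.Unmarshal([]byte(output), &st); err != nil {
		fmt.Println("driver-call failed:", err) // prints: unexpected end of JSON input
	}
}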
Aug 5 22:45:16.156564 containerd[1460]: time="2024-08-05T22:45:16.156397086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-647b7fd5c6-qz87m,Uid:3b406d8f-f2dd-434f-8811-220e268c9174,Namespace:calico-system,Attempt:0,} returns sandbox id \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\"" Aug 5 22:45:16.160345 containerd[1460]: time="2024-08-05T22:45:16.159997905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:45:16.199497 containerd[1460]: time="2024-08-05T22:45:16.199419322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mvmws,Uid:13ab8689-c863-4786-a528-806d2e67ea2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\"" Aug 5 22:45:17.559407 kubelet[2622]: E0805 22:45:17.559208 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:18.663073 containerd[1460]: time="2024-08-05T22:45:18.662999174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:18.666435 containerd[1460]: time="2024-08-05T22:45:18.666366652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:45:18.669534 containerd[1460]: time="2024-08-05T22:45:18.668335714Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:18.675895 containerd[1460]: time="2024-08-05T22:45:18.675716061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:18.677439 containerd[1460]: time="2024-08-05T22:45:18.677383797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.517331148s" Aug 5 22:45:18.677844 containerd[1460]: time="2024-08-05T22:45:18.677810939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:45:18.681444 containerd[1460]: time="2024-08-05T22:45:18.681105369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:45:18.713158 containerd[1460]: time="2024-08-05T22:45:18.713106157Z" level=info msg="CreateContainer within sandbox \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:45:18.748114 containerd[1460]: time="2024-08-05T22:45:18.747985317Z" level=info msg="CreateContainer within sandbox \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\"" Aug 5 22:45:18.749228 containerd[1460]: time="2024-08-05T22:45:18.749174086Z" level=info msg="StartContainer for \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\"" Aug 5 22:45:18.823835 systemd[1]: Started cri-containerd-58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626.scope - libcontainer container 58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626. Aug 5 22:45:18.946593 containerd[1460]: time="2024-08-05T22:45:18.944634852Z" level=info msg="StartContainer for \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\" returns successfully" Aug 5 22:45:19.556752 kubelet[2622]: E0805 22:45:19.555998 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:19.712157 containerd[1460]: time="2024-08-05T22:45:19.710490177Z" level=info msg="StopContainer for \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\" with timeout 300 (s)" Aug 5 22:45:19.712157 containerd[1460]: time="2024-08-05T22:45:19.711071900Z" level=info msg="Stop container \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\" with signal terminated" Aug 5 22:45:19.819001 kubelet[2622]: I0805 22:45:19.817534 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-647b7fd5c6-qz87m" podStartSLOduration=2.296987008 podStartE2EDuration="4.817505398s" podCreationTimestamp="2024-08-05 22:45:15 +0000 UTC" firstStartedPulling="2024-08-05 22:45:16.159585442 +0000 UTC m=+20.777587674" lastFinishedPulling="2024-08-05 22:45:18.68010383 +0000 UTC m=+23.298106064" observedRunningTime="2024-08-05 22:45:19.771930626 +0000 UTC m=+24.389932869" watchObservedRunningTime="2024-08-05 22:45:19.817505398 +0000 UTC m=+24.435507643" Aug 5 22:45:19.867868 systemd[1]: cri-containerd-58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626.scope: Deactivated successfully. 
Aug 5 22:45:19.885091 containerd[1460]: time="2024-08-05T22:45:19.884273794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:19.886570 containerd[1460]: time="2024-08-05T22:45:19.886431236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:45:19.887648 containerd[1460]: time="2024-08-05T22:45:19.887578100Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:19.897579 containerd[1460]: time="2024-08-05T22:45:19.897491433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:19.901926 containerd[1460]: time="2024-08-05T22:45:19.900411739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.219255847s" Aug 5 22:45:19.901926 containerd[1460]: time="2024-08-05T22:45:19.900509663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:45:19.906208 containerd[1460]: time="2024-08-05T22:45:19.906130394Z" level=info msg="CreateContainer within sandbox \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:45:19.944671 containerd[1460]: time="2024-08-05T22:45:19.944618798Z" level=info msg="CreateContainer within sandbox \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe\"" Aug 5 22:45:19.947680 containerd[1460]: time="2024-08-05T22:45:19.946718750Z" level=info msg="StartContainer for \"2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe\"" Aug 5 22:45:19.967343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626-rootfs.mount: Deactivated successfully. Aug 5 22:45:20.052143 systemd[1]: Started cri-containerd-2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe.scope - libcontainer container 2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe. Aug 5 22:45:20.191000 containerd[1460]: time="2024-08-05T22:45:20.190928577Z" level=info msg="StartContainer for \"2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe\" returns successfully" Aug 5 22:45:20.202438 systemd[1]: cri-containerd-2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe.scope: Deactivated successfully. Aug 5 22:45:20.259988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe-rootfs.mount: Deactivated successfully. 
Aug 5 22:45:20.526819 containerd[1460]: time="2024-08-05T22:45:20.526607611Z" level=info msg="shim disconnected" id=58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626 namespace=k8s.io Aug 5 22:45:20.526819 containerd[1460]: time="2024-08-05T22:45:20.526695298Z" level=warning msg="cleaning up after shim disconnected" id=58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626 namespace=k8s.io Aug 5 22:45:20.526819 containerd[1460]: time="2024-08-05T22:45:20.526710296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:45:20.531485 containerd[1460]: time="2024-08-05T22:45:20.530756718Z" level=info msg="shim disconnected" id=2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe namespace=k8s.io Aug 5 22:45:20.531485 containerd[1460]: time="2024-08-05T22:45:20.530846116Z" level=warning msg="cleaning up after shim disconnected" id=2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe namespace=k8s.io Aug 5 22:45:20.531485 containerd[1460]: time="2024-08-05T22:45:20.530861619Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:45:20.565796 containerd[1460]: time="2024-08-05T22:45:20.565736457Z" level=info msg="StopContainer for \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\" returns successfully" Aug 5 22:45:20.567316 containerd[1460]: time="2024-08-05T22:45:20.567260788Z" level=info msg="StopPodSandbox for \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\"" Aug 5 22:45:20.567498 containerd[1460]: time="2024-08-05T22:45:20.567326953Z" level=info msg="Container to stop \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:45:20.580769 systemd[1]: cri-containerd-949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316.scope: Deactivated successfully. 
Aug 5 22:45:20.622884 containerd[1460]: time="2024-08-05T22:45:20.622703893Z" level=info msg="shim disconnected" id=949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316 namespace=k8s.io Aug 5 22:45:20.622884 containerd[1460]: time="2024-08-05T22:45:20.622868373Z" level=warning msg="cleaning up after shim disconnected" id=949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316 namespace=k8s.io Aug 5 22:45:20.622884 containerd[1460]: time="2024-08-05T22:45:20.622896732Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:45:20.648375 containerd[1460]: time="2024-08-05T22:45:20.648270302Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:45:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:45:20.650270 containerd[1460]: time="2024-08-05T22:45:20.650222774Z" level=info msg="TearDown network for sandbox \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" successfully" Aug 5 22:45:20.650754 containerd[1460]: time="2024-08-05T22:45:20.650507932Z" level=info msg="StopPodSandbox for \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" returns successfully" Aug 5 22:45:20.684556 kubelet[2622]: I0805 22:45:20.684498 2622 topology_manager.go:215] "Topology Admit Handler" podUID="8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e" podNamespace="calico-system" podName="calico-typha-6dfc6bf8dd-rfwmg" Aug 5 22:45:20.686746 kubelet[2622]: E0805 22:45:20.684584 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3b406d8f-f2dd-434f-8811-220e268c9174" containerName="calico-typha" Aug 5 22:45:20.686746 kubelet[2622]: I0805 22:45:20.684623 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b406d8f-f2dd-434f-8811-220e268c9174" containerName="calico-typha" Aug 5 22:45:20.699299 systemd[1]: Created slice kubepods-besteffort-pod8f6bc9ee_60f0_47fc_bb1a_e84a2359b01e.slice - libcontainer container kubepods-besteffort-pod8f6bc9ee_60f0_47fc_bb1a_e84a2359b01e.slice. 
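The slice created just above shows how kubelet, under the systemd cgroup driver, names the transient slice for the replacement typha pod: the QoS class (besteffort) plus the pod UID with its dashes flattened to underscores, since "-" acts as a hierarchy separator in systemd slice names. An illustrative Go sketch of that mapping (not kubelet's actual code):

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the log entry above: QoS class plus
// pod UID, with dashes in the UID replaced so they do not nest slices in systemd.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e"))
	// kubepods-besteffort-pod8f6bc9ee_60f0_47fc_bb1a_e84a2359b01e.slice
}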
Aug 5 22:45:20.717509 containerd[1460]: time="2024-08-05T22:45:20.717320621Z" level=info msg="StopPodSandbox for \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\"" Aug 5 22:45:20.719549 containerd[1460]: time="2024-08-05T22:45:20.717396952Z" level=info msg="Container to stop \"2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:45:20.730485 kubelet[2622]: I0805 22:45:20.730429 2622 scope.go:117] "RemoveContainer" containerID="58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626" Aug 5 22:45:20.734051 kubelet[2622]: I0805 22:45:20.733664 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e-tigera-ca-bundle\") pod \"calico-typha-6dfc6bf8dd-rfwmg\" (UID: \"8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e\") " pod="calico-system/calico-typha-6dfc6bf8dd-rfwmg" Aug 5 22:45:20.734051 kubelet[2622]: I0805 22:45:20.733724 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e-typha-certs\") pod \"calico-typha-6dfc6bf8dd-rfwmg\" (UID: \"8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e\") " pod="calico-system/calico-typha-6dfc6bf8dd-rfwmg" Aug 5 22:45:20.740317 containerd[1460]: time="2024-08-05T22:45:20.740028912Z" level=info msg="RemoveContainer for \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\"" Aug 5 22:45:20.762550 containerd[1460]: time="2024-08-05T22:45:20.761352618Z" level=info msg="RemoveContainer for \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\" returns successfully" Aug 5 22:45:20.763420 kubelet[2622]: I0805 22:45:20.763385 2622 scope.go:117] "RemoveContainer" containerID="58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626" Aug 5 22:45:20.764376 systemd[1]: cri-containerd-2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088.scope: Deactivated successfully. 
Aug 5 22:45:20.767577 containerd[1460]: time="2024-08-05T22:45:20.766937567Z" level=error msg="ContainerStatus for \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\": not found" Aug 5 22:45:20.767769 kubelet[2622]: E0805 22:45:20.767232 2622 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\": not found" containerID="58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626" Aug 5 22:45:20.767769 kubelet[2622]: I0805 22:45:20.767282 2622 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626"} err="failed to get container status \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\": rpc error: code = NotFound desc = an error occurred when try to find container \"58d8aa0e994b82fb7e8ab0b0f19436654824bf954af1a31cbb8c9be4e3b35626\": not found" Aug 5 22:45:20.818690 containerd[1460]: time="2024-08-05T22:45:20.818511066Z" level=info msg="shim disconnected" id=2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088 namespace=k8s.io Aug 5 22:45:20.819479 containerd[1460]: time="2024-08-05T22:45:20.818998360Z" level=warning msg="cleaning up after shim disconnected" id=2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088 namespace=k8s.io Aug 5 22:45:20.819479 containerd[1460]: time="2024-08-05T22:45:20.819035066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:45:20.837346 kubelet[2622]: I0805 22:45:20.835665 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6j5v\" (UniqueName: \"kubernetes.io/projected/3b406d8f-f2dd-434f-8811-220e268c9174-kube-api-access-q6j5v\") pod \"3b406d8f-f2dd-434f-8811-220e268c9174\" (UID: \"3b406d8f-f2dd-434f-8811-220e268c9174\") " Aug 5 22:45:20.837346 kubelet[2622]: I0805 22:45:20.836611 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b406d8f-f2dd-434f-8811-220e268c9174-tigera-ca-bundle\") pod \"3b406d8f-f2dd-434f-8811-220e268c9174\" (UID: \"3b406d8f-f2dd-434f-8811-220e268c9174\") " Aug 5 22:45:20.837346 kubelet[2622]: I0805 22:45:20.836661 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3b406d8f-f2dd-434f-8811-220e268c9174-typha-certs\") pod \"3b406d8f-f2dd-434f-8811-220e268c9174\" (UID: \"3b406d8f-f2dd-434f-8811-220e268c9174\") " Aug 5 22:45:20.837346 kubelet[2622]: I0805 22:45:20.836754 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp2xw\" (UniqueName: \"kubernetes.io/projected/8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e-kube-api-access-cp2xw\") pod \"calico-typha-6dfc6bf8dd-rfwmg\" (UID: \"8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e\") " pod="calico-system/calico-typha-6dfc6bf8dd-rfwmg" Aug 5 22:45:20.850819 kubelet[2622]: I0805 22:45:20.850761 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b406d8f-f2dd-434f-8811-220e268c9174-kube-api-access-q6j5v" (OuterVolumeSpecName: "kube-api-access-q6j5v") pod 
"3b406d8f-f2dd-434f-8811-220e268c9174" (UID: "3b406d8f-f2dd-434f-8811-220e268c9174"). InnerVolumeSpecName "kube-api-access-q6j5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:45:20.854326 kubelet[2622]: I0805 22:45:20.853988 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b406d8f-f2dd-434f-8811-220e268c9174-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "3b406d8f-f2dd-434f-8811-220e268c9174" (UID: "3b406d8f-f2dd-434f-8811-220e268c9174"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:45:20.862384 kubelet[2622]: I0805 22:45:20.862217 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b406d8f-f2dd-434f-8811-220e268c9174-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "3b406d8f-f2dd-434f-8811-220e268c9174" (UID: "3b406d8f-f2dd-434f-8811-220e268c9174"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:45:20.868843 containerd[1460]: time="2024-08-05T22:45:20.868749972Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:45:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:45:20.871087 containerd[1460]: time="2024-08-05T22:45:20.870954808Z" level=info msg="TearDown network for sandbox \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" successfully" Aug 5 22:45:20.871087 containerd[1460]: time="2024-08-05T22:45:20.871017422Z" level=info msg="StopPodSandbox for \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" returns successfully" Aug 5 22:45:20.939796 kubelet[2622]: I0805 22:45:20.939558 2622 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-q6j5v\" (UniqueName: \"kubernetes.io/projected/3b406d8f-f2dd-434f-8811-220e268c9174-kube-api-access-q6j5v\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:20.939796 kubelet[2622]: I0805 22:45:20.939612 2622 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b406d8f-f2dd-434f-8811-220e268c9174-tigera-ca-bundle\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:20.939796 kubelet[2622]: I0805 22:45:20.939648 2622 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3b406d8f-f2dd-434f-8811-220e268c9174-typha-certs\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:20.941569 systemd[1]: var-lib-kubelet-pods-3b406d8f\x2df2dd\x2d434f\x2d8811\x2d220e268c9174-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Aug 5 22:45:20.941717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088-rootfs.mount: Deactivated successfully. Aug 5 22:45:20.941825 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088-shm.mount: Deactivated successfully. Aug 5 22:45:20.941926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316-rootfs.mount: Deactivated successfully. 
Aug 5 22:45:20.942033 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316-shm.mount: Deactivated successfully. Aug 5 22:45:20.942128 systemd[1]: var-lib-kubelet-pods-3b406d8f\x2df2dd\x2d434f\x2d8811\x2d220e268c9174-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6j5v.mount: Deactivated successfully. Aug 5 22:45:20.942228 systemd[1]: var-lib-kubelet-pods-3b406d8f\x2df2dd\x2d434f\x2d8811\x2d220e268c9174-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Aug 5 22:45:21.010344 containerd[1460]: time="2024-08-05T22:45:21.009706782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dfc6bf8dd-rfwmg,Uid:8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e,Namespace:calico-system,Attempt:0,}" Aug 5 22:45:21.041905 kubelet[2622]: I0805 22:45:21.041618 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-xtables-lock\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.042301 kubelet[2622]: I0805 22:45:21.042098 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.042387 kubelet[2622]: I0805 22:45:21.042307 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13ab8689-c863-4786-a528-806d2e67ea2c-node-certs\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.043494 kubelet[2622]: I0805 22:45:21.042350 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2b5p\" (UniqueName: \"kubernetes.io/projected/13ab8689-c863-4786-a528-806d2e67ea2c-kube-api-access-t2b5p\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.043494 kubelet[2622]: I0805 22:45:21.042513 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-net-dir\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.043851 kubelet[2622]: I0805 22:45:21.043824 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-flexvol-driver-host\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044576 kubelet[2622]: I0805 22:45:21.043982 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-run-calico\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044576 kubelet[2622]: I0805 22:45:21.044019 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-policysync\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044576 kubelet[2622]: I0805 22:45:21.044042 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-bin-dir\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044576 kubelet[2622]: I0805 22:45:21.044073 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-lib-modules\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044576 kubelet[2622]: I0805 22:45:21.044097 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-lib-calico\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044576 kubelet[2622]: I0805 22:45:21.044123 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-log-dir\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044953 kubelet[2622]: I0805 22:45:21.044158 2622 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13ab8689-c863-4786-a528-806d2e67ea2c-tigera-ca-bundle\") pod \"13ab8689-c863-4786-a528-806d2e67ea2c\" (UID: \"13ab8689-c863-4786-a528-806d2e67ea2c\") " Aug 5 22:45:21.044953 kubelet[2622]: I0805 22:45:21.044225 2622 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-xtables-lock\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.050429 kubelet[2622]: I0805 22:45:21.047628 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.050429 kubelet[2622]: I0805 22:45:21.048018 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13ab8689-c863-4786-a528-806d2e67ea2c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:45:21.050429 kubelet[2622]: I0805 22:45:21.048077 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-policysync" (OuterVolumeSpecName: "policysync") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.050429 kubelet[2622]: I0805 22:45:21.048102 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.050429 kubelet[2622]: I0805 22:45:21.048124 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.050836 kubelet[2622]: I0805 22:45:21.048359 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.050836 kubelet[2622]: I0805 22:45:21.048422 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.051245 kubelet[2622]: I0805 22:45:21.051211 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.051984 kubelet[2622]: I0805 22:45:21.051952 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:45:21.054919 systemd[1]: Removed slice kubepods-besteffort-pod3b406d8f_f2dd_434f_8811_220e268c9174.slice - libcontainer container kubepods-besteffort-pod3b406d8f_f2dd_434f_8811_220e268c9174.slice. Aug 5 22:45:21.062306 systemd[1]: var-lib-kubelet-pods-13ab8689\x2dc863\x2d4786\x2da528\x2d806d2e67ea2c-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Aug 5 22:45:21.066271 kubelet[2622]: I0805 22:45:21.066215 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13ab8689-c863-4786-a528-806d2e67ea2c-node-certs" (OuterVolumeSpecName: "node-certs") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:45:21.078372 systemd[1]: var-lib-kubelet-pods-13ab8689\x2dc863\x2d4786\x2da528\x2d806d2e67ea2c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt2b5p.mount: Deactivated successfully. Aug 5 22:45:21.085928 kubelet[2622]: I0805 22:45:21.085797 2622 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ab8689-c863-4786-a528-806d2e67ea2c-kube-api-access-t2b5p" (OuterVolumeSpecName: "kube-api-access-t2b5p") pod "13ab8689-c863-4786-a528-806d2e67ea2c" (UID: "13ab8689-c863-4786-a528-806d2e67ea2c"). InnerVolumeSpecName "kube-api-access-t2b5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:45:21.098491 containerd[1460]: time="2024-08-05T22:45:21.098199036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:21.098693 containerd[1460]: time="2024-08-05T22:45:21.098587661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:21.100280 containerd[1460]: time="2024-08-05T22:45:21.098690520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:21.100280 containerd[1460]: time="2024-08-05T22:45:21.098850095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:21.128858 systemd[1]: Started cri-containerd-d41bffedbeb53f4cf52a8efedc66424583ba9e75a91e617f4f0d381b4c6d6b36.scope - libcontainer container d41bffedbeb53f4cf52a8efedc66424583ba9e75a91e617f4f0d381b4c6d6b36. Aug 5 22:45:21.144886 kubelet[2622]: I0805 22:45:21.144602 2622 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-policysync\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.144886 kubelet[2622]: I0805 22:45:21.144650 2622 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-bin-dir\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.144886 kubelet[2622]: I0805 22:45:21.144671 2622 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-lib-modules\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.144886 kubelet[2622]: I0805 22:45:21.144687 2622 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-lib-calico\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.144886 kubelet[2622]: I0805 22:45:21.144705 2622 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-log-dir\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.144886 kubelet[2622]: I0805 22:45:21.144723 2622 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/13ab8689-c863-4786-a528-806d2e67ea2c-tigera-ca-bundle\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.144886 kubelet[2622]: I0805 22:45:21.144741 2622 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13ab8689-c863-4786-a528-806d2e67ea2c-node-certs\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.145449 kubelet[2622]: I0805 22:45:21.144756 2622 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-cni-net-dir\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.145449 kubelet[2622]: I0805 22:45:21.144787 2622 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-flexvol-driver-host\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.145449 kubelet[2622]: I0805 22:45:21.144819 2622 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t2b5p\" (UniqueName: \"kubernetes.io/projected/13ab8689-c863-4786-a528-806d2e67ea2c-kube-api-access-t2b5p\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.145449 kubelet[2622]: I0805 22:45:21.144836 2622 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13ab8689-c863-4786-a528-806d2e67ea2c-var-run-calico\") on node \"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal\" DevicePath \"\"" Aug 5 22:45:21.205804 containerd[1460]: time="2024-08-05T22:45:21.205723733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6dfc6bf8dd-rfwmg,Uid:8f6bc9ee-60f0-47fc-bb1a-e84a2359b01e,Namespace:calico-system,Attempt:0,} returns sandbox id \"d41bffedbeb53f4cf52a8efedc66424583ba9e75a91e617f4f0d381b4c6d6b36\"" Aug 5 22:45:21.225179 containerd[1460]: time="2024-08-05T22:45:21.225083244Z" level=info msg="CreateContainer within sandbox \"d41bffedbeb53f4cf52a8efedc66424583ba9e75a91e617f4f0d381b4c6d6b36\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:45:21.250337 containerd[1460]: time="2024-08-05T22:45:21.250158997Z" level=info msg="CreateContainer within sandbox \"d41bffedbeb53f4cf52a8efedc66424583ba9e75a91e617f4f0d381b4c6d6b36\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a120302a3e4755b39ccecd0233877300442b8d8d4a722e24b8ee11b0b3ccf77a\"" Aug 5 22:45:21.251123 containerd[1460]: time="2024-08-05T22:45:21.251059718Z" level=info msg="StartContainer for \"a120302a3e4755b39ccecd0233877300442b8d8d4a722e24b8ee11b0b3ccf77a\"" Aug 5 22:45:21.295740 systemd[1]: Started cri-containerd-a120302a3e4755b39ccecd0233877300442b8d8d4a722e24b8ee11b0b3ccf77a.scope - libcontainer container a120302a3e4755b39ccecd0233877300442b8d8d4a722e24b8ee11b0b3ccf77a. 
Aug 5 22:45:21.380976 containerd[1460]: time="2024-08-05T22:45:21.380817932Z" level=info msg="StartContainer for \"a120302a3e4755b39ccecd0233877300442b8d8d4a722e24b8ee11b0b3ccf77a\" returns successfully" Aug 5 22:45:21.555664 kubelet[2622]: E0805 22:45:21.555518 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:21.564436 kubelet[2622]: I0805 22:45:21.563053 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b406d8f-f2dd-434f-8811-220e268c9174" path="/var/lib/kubelet/pods/3b406d8f-f2dd-434f-8811-220e268c9174/volumes" Aug 5 22:45:21.579595 systemd[1]: Removed slice kubepods-besteffort-pod13ab8689_c863_4786_a528_806d2e67ea2c.slice - libcontainer container kubepods-besteffort-pod13ab8689_c863_4786_a528_806d2e67ea2c.slice. Aug 5 22:45:21.735323 kubelet[2622]: I0805 22:45:21.735182 2622 scope.go:117] "RemoveContainer" containerID="2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe" Aug 5 22:45:21.743788 containerd[1460]: time="2024-08-05T22:45:21.743565119Z" level=info msg="RemoveContainer for \"2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe\"" Aug 5 22:45:21.750535 containerd[1460]: time="2024-08-05T22:45:21.749085940Z" level=info msg="RemoveContainer for \"2d2a5c1eada69460369fe15343175b41d5d9ac88261527dab52348de5455cefe\" returns successfully" Aug 5 22:45:21.792494 kubelet[2622]: I0805 22:45:21.792410 2622 topology_manager.go:215] "Topology Admit Handler" podUID="93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b" podNamespace="calico-system" podName="calico-node-xbrg9" Aug 5 22:45:21.792685 kubelet[2622]: E0805 22:45:21.792569 2622 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13ab8689-c863-4786-a528-806d2e67ea2c" containerName="flexvol-driver" Aug 5 22:45:21.792685 kubelet[2622]: I0805 22:45:21.792616 2622 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ab8689-c863-4786-a528-806d2e67ea2c" containerName="flexvol-driver" Aug 5 22:45:21.807342 systemd[1]: Created slice kubepods-besteffort-pod93e1ef2f_0cc3_47c7_a8f5_fe188a75ec3b.slice - libcontainer container kubepods-besteffort-pod93e1ef2f_0cc3_47c7_a8f5_fe188a75ec3b.slice. 
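[Editor's note] The "Removed slice kubepods-besteffort-pod13ab8689_…" and "Created slice kubepods-besteffort-pod93e1ef2f_…" entries above show how the kubelet's systemd cgroup driver derives a pod's cgroup slice from its UID and QoS class. A simplified sketch of that mapping, reproducing the names in the log:

```python
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # Simplified sketch: with the systemd cgroup driver, a pod's cgroup slice is
    # kubepods-<qos>-pod<uid>.slice, with '-' in the UID replaced by '_'
    # (Guaranteed-QoS pods omit the <qos> segment).
    return "kubepods-%s-pod%s.slice" % (qos_class, pod_uid.replace("-", "_"))

print(pod_slice_name("93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b"))
# kubepods-besteffort-pod93e1ef2f_0cc3_47c7_a8f5_fe188a75ec3b.slice
```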
Aug 5 22:45:21.810279 kubelet[2622]: W0805 22:45:21.810231 2622 reflector.go:547] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal' and this object Aug 5 22:45:21.810411 kubelet[2622]: E0805 22:45:21.810284 2622 reflector.go:150] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal' and this object Aug 5 22:45:21.840941 kubelet[2622]: I0805 22:45:21.840869 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6dfc6bf8dd-rfwmg" podStartSLOduration=5.840840944 podStartE2EDuration="5.840840944s" podCreationTimestamp="2024-08-05 22:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:45:21.814073822 +0000 UTC m=+26.432076066" watchObservedRunningTime="2024-08-05 22:45:21.840840944 +0000 UTC m=+26.458843186" Aug 5 22:45:21.851559 kubelet[2622]: I0805 22:45:21.851501 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-xtables-lock\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.851559 kubelet[2622]: I0805 22:45:21.851571 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-var-run-calico\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.851844 kubelet[2622]: I0805 22:45:21.851599 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-flexvol-driver-host\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.851844 kubelet[2622]: I0805 22:45:21.851650 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-policysync\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.851844 kubelet[2622]: I0805 22:45:21.851676 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-cni-log-dir\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.851844 kubelet[2622]: I0805 22:45:21.851701 2622 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-node-certs\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.851844 kubelet[2622]: I0805 22:45:21.851728 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmw4x\" (UniqueName: \"kubernetes.io/projected/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-kube-api-access-xmw4x\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.852136 kubelet[2622]: I0805 22:45:21.851759 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-lib-modules\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.852136 kubelet[2622]: I0805 22:45:21.851788 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-cni-net-dir\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.852136 kubelet[2622]: I0805 22:45:21.851821 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-var-lib-calico\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.852136 kubelet[2622]: I0805 22:45:21.851847 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-tigera-ca-bundle\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:21.852136 kubelet[2622]: I0805 22:45:21.851873 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-cni-bin-dir\") pod \"calico-node-xbrg9\" (UID: \"93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b\") " pod="calico-system/calico-node-xbrg9" Aug 5 22:45:22.954010 kubelet[2622]: E0805 22:45:22.953955 2622 secret.go:194] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Aug 5 22:45:22.954672 kubelet[2622]: E0805 22:45:22.954089 2622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-node-certs podName:93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b nodeName:}" failed. No retries permitted until 2024-08-05 22:45:23.454061246 +0000 UTC m=+28.072063484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b-node-certs") pod "calico-node-xbrg9" (UID: "93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b") : failed to sync secret cache: timed out waiting for the condition Aug 5 22:45:23.556138 kubelet[2622]: E0805 22:45:23.555663 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:23.559710 kubelet[2622]: I0805 22:45:23.559456 2622 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ab8689-c863-4786-a528-806d2e67ea2c" path="/var/lib/kubelet/pods/13ab8689-c863-4786-a528-806d2e67ea2c/volumes" Aug 5 22:45:23.615378 containerd[1460]: time="2024-08-05T22:45:23.615229774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xbrg9,Uid:93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b,Namespace:calico-system,Attempt:0,}" Aug 5 22:45:23.655977 containerd[1460]: time="2024-08-05T22:45:23.655809356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:23.656427 containerd[1460]: time="2024-08-05T22:45:23.656215136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:23.656427 containerd[1460]: time="2024-08-05T22:45:23.656253035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:23.656427 containerd[1460]: time="2024-08-05T22:45:23.656272962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:23.689779 systemd[1]: Started cri-containerd-f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450.scope - libcontainer container f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450. Aug 5 22:45:23.724421 containerd[1460]: time="2024-08-05T22:45:23.724349541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xbrg9,Uid:93e1ef2f-0cc3-47c7-a8f5-fe188a75ec3b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450\"" Aug 5 22:45:23.729817 containerd[1460]: time="2024-08-05T22:45:23.729764185Z" level=info msg="CreateContainer within sandbox \"f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:45:23.752534 containerd[1460]: time="2024-08-05T22:45:23.751997009Z" level=info msg="CreateContainer within sandbox \"f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3\"" Aug 5 22:45:23.754495 containerd[1460]: time="2024-08-05T22:45:23.753429741Z" level=info msg="StartContainer for \"7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3\"" Aug 5 22:45:23.795748 systemd[1]: Started cri-containerd-7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3.scope - libcontainer container 7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3. 
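[Editor's note] The nestedpendingoperations entry above ("No retries permitted until … (durationBeforeRetry 500ms)") is the kubelet's per-volume retry backoff: the node-certs mount failed while the secret cache was still syncing, so the operation is parked for 500ms before the next attempt (the delay grows on repeated failures). The quoted retry time matches the failure time plus 500ms, as a quick check with the two timestamps from the log shows:

```python
from datetime import datetime

# Timestamps copied from the kubelet entries above (UTC, truncated to microseconds).
failed   = datetime.fromisoformat("2024-08-05 22:45:22.954089")
retry_at = datetime.fromisoformat("2024-08-05 22:45:23.454061")
print(retry_at - failed)  # 0:00:00.499972 — i.e. the 500ms durationBeforeRetry
```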
Aug 5 22:45:23.840449 containerd[1460]: time="2024-08-05T22:45:23.840219310Z" level=info msg="StartContainer for \"7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3\" returns successfully" Aug 5 22:45:23.856236 systemd[1]: cri-containerd-7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3.scope: Deactivated successfully. Aug 5 22:45:23.902722 containerd[1460]: time="2024-08-05T22:45:23.902543271Z" level=info msg="shim disconnected" id=7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3 namespace=k8s.io Aug 5 22:45:23.903132 containerd[1460]: time="2024-08-05T22:45:23.902746019Z" level=warning msg="cleaning up after shim disconnected" id=7dd0fcc974106618f21b1227b42eb27d1624a7b100b4ced87eadad7987e08ea3 namespace=k8s.io Aug 5 22:45:23.903132 containerd[1460]: time="2024-08-05T22:45:23.902772242Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:45:24.767809 containerd[1460]: time="2024-08-05T22:45:24.767755581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:45:25.557202 kubelet[2622]: E0805 22:45:25.557109 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:27.555635 kubelet[2622]: E0805 22:45:27.555574 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:28.849170 containerd[1460]: time="2024-08-05T22:45:28.849078594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:28.850646 containerd[1460]: time="2024-08-05T22:45:28.850583613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 5 22:45:28.852177 containerd[1460]: time="2024-08-05T22:45:28.852063697Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:28.855497 containerd[1460]: time="2024-08-05T22:45:28.855421986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:28.856570 containerd[1460]: time="2024-08-05T22:45:28.856340864Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.088526889s" Aug 5 22:45:28.856570 containerd[1460]: time="2024-08-05T22:45:28.856391929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:45:28.859932 containerd[1460]: time="2024-08-05T22:45:28.859887089Z" level=info msg="CreateContainer within sandbox 
\"f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:45:28.887561 containerd[1460]: time="2024-08-05T22:45:28.887489483Z" level=info msg="CreateContainer within sandbox \"f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1\"" Aug 5 22:45:28.890494 containerd[1460]: time="2024-08-05T22:45:28.888337663Z" level=info msg="StartContainer for \"33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1\"" Aug 5 22:45:28.942069 systemd[1]: Started cri-containerd-33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1.scope - libcontainer container 33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1. Aug 5 22:45:28.990735 containerd[1460]: time="2024-08-05T22:45:28.990667080Z" level=info msg="StartContainer for \"33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1\" returns successfully" Aug 5 22:45:29.556517 kubelet[2622]: E0805 22:45:29.555219 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:29.561234 kubelet[2622]: I0805 22:45:29.561187 2622 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:45:29.742755 containerd[1460]: time="2024-08-05T22:45:29.742684125Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:45:29.746778 systemd[1]: cri-containerd-33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1.scope: Deactivated successfully. Aug 5 22:45:29.784596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1-rootfs.mount: Deactivated successfully. Aug 5 22:45:29.794239 kubelet[2622]: I0805 22:45:29.794121 2622 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 22:45:29.832220 kubelet[2622]: I0805 22:45:29.832059 2622 topology_manager.go:215] "Topology Admit Handler" podUID="7539f340-cbde-47bc-b2a5-8717e2e430a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8cbg5" Aug 5 22:45:29.835841 kubelet[2622]: I0805 22:45:29.835796 2622 topology_manager.go:215] "Topology Admit Handler" podUID="cb01de05-8660-4e6b-aa1d-bd76289877a7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5pqbg" Aug 5 22:45:29.842419 kubelet[2622]: I0805 22:45:29.842368 2622 topology_manager.go:215] "Topology Admit Handler" podUID="e56c96de-82fb-44bf-8d12-6a7d6c5ed536" podNamespace="calico-system" podName="calico-kube-controllers-7d999cf867-gtvwz" Aug 5 22:45:29.859257 systemd[1]: Created slice kubepods-burstable-pod7539f340_cbde_47bc_b2a5_8717e2e430a7.slice - libcontainer container kubepods-burstable-pod7539f340_cbde_47bc_b2a5_8717e2e430a7.slice. Aug 5 22:45:29.870038 systemd[1]: Created slice kubepods-burstable-podcb01de05_8660_4e6b_aa1d_bd76289877a7.slice - libcontainer container kubepods-burstable-podcb01de05_8660_4e6b_aa1d_bd76289877a7.slice. 
Aug 5 22:45:29.879929 systemd[1]: Created slice kubepods-besteffort-pode56c96de_82fb_44bf_8d12_6a7d6c5ed536.slice - libcontainer container kubepods-besteffort-pode56c96de_82fb_44bf_8d12_6a7d6c5ed536.slice. Aug 5 22:45:29.915220 kubelet[2622]: I0805 22:45:29.915141 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjzq5\" (UniqueName: \"kubernetes.io/projected/7539f340-cbde-47bc-b2a5-8717e2e430a7-kube-api-access-fjzq5\") pod \"coredns-7db6d8ff4d-8cbg5\" (UID: \"7539f340-cbde-47bc-b2a5-8717e2e430a7\") " pod="kube-system/coredns-7db6d8ff4d-8cbg5" Aug 5 22:45:29.915220 kubelet[2622]: I0805 22:45:29.915219 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkhs7\" (UniqueName: \"kubernetes.io/projected/cb01de05-8660-4e6b-aa1d-bd76289877a7-kube-api-access-vkhs7\") pod \"coredns-7db6d8ff4d-5pqbg\" (UID: \"cb01de05-8660-4e6b-aa1d-bd76289877a7\") " pod="kube-system/coredns-7db6d8ff4d-5pqbg" Aug 5 22:45:29.915220 kubelet[2622]: I0805 22:45:29.915261 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb01de05-8660-4e6b-aa1d-bd76289877a7-config-volume\") pod \"coredns-7db6d8ff4d-5pqbg\" (UID: \"cb01de05-8660-4e6b-aa1d-bd76289877a7\") " pod="kube-system/coredns-7db6d8ff4d-5pqbg" Aug 5 22:45:29.972123 kubelet[2622]: I0805 22:45:29.915302 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2l7n\" (UniqueName: \"kubernetes.io/projected/e56c96de-82fb-44bf-8d12-6a7d6c5ed536-kube-api-access-d2l7n\") pod \"calico-kube-controllers-7d999cf867-gtvwz\" (UID: \"e56c96de-82fb-44bf-8d12-6a7d6c5ed536\") " pod="calico-system/calico-kube-controllers-7d999cf867-gtvwz" Aug 5 22:45:29.972123 kubelet[2622]: I0805 22:45:29.915334 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e56c96de-82fb-44bf-8d12-6a7d6c5ed536-tigera-ca-bundle\") pod \"calico-kube-controllers-7d999cf867-gtvwz\" (UID: \"e56c96de-82fb-44bf-8d12-6a7d6c5ed536\") " pod="calico-system/calico-kube-controllers-7d999cf867-gtvwz" Aug 5 22:45:29.972123 kubelet[2622]: I0805 22:45:29.915366 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7539f340-cbde-47bc-b2a5-8717e2e430a7-config-volume\") pod \"coredns-7db6d8ff4d-8cbg5\" (UID: \"7539f340-cbde-47bc-b2a5-8717e2e430a7\") " pod="kube-system/coredns-7db6d8ff4d-8cbg5" Aug 5 22:45:30.168836 containerd[1460]: time="2024-08-05T22:45:30.168605829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbg5,Uid:7539f340-cbde-47bc-b2a5-8717e2e430a7,Namespace:kube-system,Attempt:0,}" Aug 5 22:45:30.177934 containerd[1460]: time="2024-08-05T22:45:30.177821784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pqbg,Uid:cb01de05-8660-4e6b-aa1d-bd76289877a7,Namespace:kube-system,Attempt:0,}" Aug 5 22:45:30.188764 containerd[1460]: time="2024-08-05T22:45:30.188698358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d999cf867-gtvwz,Uid:e56c96de-82fb-44bf-8d12-6a7d6c5ed536,Namespace:calico-system,Attempt:0,}" Aug 5 22:45:30.520693 containerd[1460]: time="2024-08-05T22:45:30.520237800Z" level=info msg="shim disconnected" 
id=33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1 namespace=k8s.io Aug 5 22:45:30.520693 containerd[1460]: time="2024-08-05T22:45:30.520345644Z" level=warning msg="cleaning up after shim disconnected" id=33b755633c389bfa9f7ce9a7a1c6133e7b15e84b90db1ef5fc511e365d196df1 namespace=k8s.io Aug 5 22:45:30.520693 containerd[1460]: time="2024-08-05T22:45:30.520373454Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:45:30.704345 containerd[1460]: time="2024-08-05T22:45:30.703586013Z" level=error msg="Failed to destroy network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.704345 containerd[1460]: time="2024-08-05T22:45:30.704145146Z" level=error msg="encountered an error cleaning up failed sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.704345 containerd[1460]: time="2024-08-05T22:45:30.704222202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d999cf867-gtvwz,Uid:e56c96de-82fb-44bf-8d12-6a7d6c5ed536,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.706352 kubelet[2622]: E0805 22:45:30.704936 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.706352 kubelet[2622]: E0805 22:45:30.705028 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d999cf867-gtvwz" Aug 5 22:45:30.706352 kubelet[2622]: E0805 22:45:30.705062 2622 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d999cf867-gtvwz" Aug 5 22:45:30.707011 kubelet[2622]: E0805 22:45:30.705128 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d999cf867-gtvwz_calico-system(e56c96de-82fb-44bf-8d12-6a7d6c5ed536)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d999cf867-gtvwz_calico-system(e56c96de-82fb-44bf-8d12-6a7d6c5ed536)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d999cf867-gtvwz" podUID="e56c96de-82fb-44bf-8d12-6a7d6c5ed536" Aug 5 22:45:30.724054 containerd[1460]: time="2024-08-05T22:45:30.723263234Z" level=error msg="Failed to destroy network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.724054 containerd[1460]: time="2024-08-05T22:45:30.723860560Z" level=error msg="encountered an error cleaning up failed sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.724054 containerd[1460]: time="2024-08-05T22:45:30.723958958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbg5,Uid:7539f340-cbde-47bc-b2a5-8717e2e430a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.724348 kubelet[2622]: E0805 22:45:30.724244 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.724348 kubelet[2622]: E0805 22:45:30.724321 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbg5" Aug 5 22:45:30.724685 kubelet[2622]: E0805 22:45:30.724371 2622 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8cbg5" Aug 5 22:45:30.724685 kubelet[2622]: E0805 22:45:30.724436 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8cbg5_kube-system(7539f340-cbde-47bc-b2a5-8717e2e430a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8cbg5_kube-system(7539f340-cbde-47bc-b2a5-8717e2e430a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbg5" podUID="7539f340-cbde-47bc-b2a5-8717e2e430a7" Aug 5 22:45:30.728690 containerd[1460]: time="2024-08-05T22:45:30.728637692Z" level=error msg="Failed to destroy network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.729205 containerd[1460]: time="2024-08-05T22:45:30.729160695Z" level=error msg="encountered an error cleaning up failed sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.729353 containerd[1460]: time="2024-08-05T22:45:30.729238381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pqbg,Uid:cb01de05-8660-4e6b-aa1d-bd76289877a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.729620 kubelet[2622]: E0805 22:45:30.729508 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.729620 kubelet[2622]: E0805 22:45:30.729576 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5pqbg" Aug 5 22:45:30.729785 kubelet[2622]: E0805 22:45:30.729621 2622 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5pqbg" Aug 5 22:45:30.730048 kubelet[2622]: E0805 22:45:30.729699 2622 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5pqbg_kube-system(cb01de05-8660-4e6b-aa1d-bd76289877a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5pqbg_kube-system(cb01de05-8660-4e6b-aa1d-bd76289877a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5pqbg" podUID="cb01de05-8660-4e6b-aa1d-bd76289877a7" Aug 5 22:45:30.796108 containerd[1460]: time="2024-08-05T22:45:30.795651619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:45:30.797848 kubelet[2622]: I0805 22:45:30.797117 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:30.798790 containerd[1460]: time="2024-08-05T22:45:30.798129132Z" level=info msg="StopPodSandbox for \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\"" Aug 5 22:45:30.798790 containerd[1460]: time="2024-08-05T22:45:30.798436022Z" level=info msg="Ensure that sandbox 4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9 in task-service has been cleanup successfully" Aug 5 22:45:30.807490 kubelet[2622]: I0805 22:45:30.805573 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:30.807490 kubelet[2622]: I0805 22:45:30.807378 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:30.807741 containerd[1460]: time="2024-08-05T22:45:30.806398869Z" level=info msg="StopPodSandbox for \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\"" Aug 5 22:45:30.808113 containerd[1460]: time="2024-08-05T22:45:30.808069573Z" level=info msg="StopPodSandbox for \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\"" Aug 5 22:45:30.808643 containerd[1460]: time="2024-08-05T22:45:30.808393645Z" level=info msg="Ensure that sandbox 402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2 in task-service has been cleanup successfully" Aug 5 22:45:30.809263 containerd[1460]: time="2024-08-05T22:45:30.808925056Z" level=info msg="Ensure that sandbox de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405 in task-service has been cleanup successfully" Aug 5 22:45:30.914722 containerd[1460]: time="2024-08-05T22:45:30.913925810Z" level=error msg="StopPodSandbox for \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\" failed" error="failed to destroy network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.914722 containerd[1460]: time="2024-08-05T22:45:30.914091841Z" level=error msg="StopPodSandbox for \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\" failed" error="failed to destroy network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.914973 kubelet[2622]: E0805 22:45:30.914337 2622 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:30.914973 kubelet[2622]: E0805 22:45:30.914405 2622 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2"} Aug 5 22:45:30.914973 kubelet[2622]: E0805 22:45:30.914532 2622 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:30.914973 kubelet[2622]: E0805 22:45:30.914579 2622 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9"} Aug 5 22:45:30.914973 kubelet[2622]: E0805 22:45:30.914616 2622 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e56c96de-82fb-44bf-8d12-6a7d6c5ed536\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:45:30.915320 kubelet[2622]: E0805 22:45:30.914655 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e56c96de-82fb-44bf-8d12-6a7d6c5ed536\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d999cf867-gtvwz" podUID="e56c96de-82fb-44bf-8d12-6a7d6c5ed536" Aug 5 22:45:30.915620 kubelet[2622]: E0805 22:45:30.915450 2622 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7539f340-cbde-47bc-b2a5-8717e2e430a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:45:30.915620 kubelet[2622]: E0805 22:45:30.915528 2622 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7539f340-cbde-47bc-b2a5-8717e2e430a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8cbg5" podUID="7539f340-cbde-47bc-b2a5-8717e2e430a7" Aug 5 22:45:30.918998 containerd[1460]: time="2024-08-05T22:45:30.918935754Z" level=error msg="StopPodSandbox for \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\" failed" error="failed to destroy network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:30.919409 kubelet[2622]: E0805 22:45:30.919343 2622 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:30.919542 kubelet[2622]: E0805 22:45:30.919405 2622 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405"} Aug 5 22:45:30.919542 kubelet[2622]: E0805 22:45:30.919455 2622 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb01de05-8660-4e6b-aa1d-bd76289877a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:45:30.919695 kubelet[2622]: E0805 22:45:30.919524 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb01de05-8660-4e6b-aa1d-bd76289877a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5pqbg" podUID="cb01de05-8660-4e6b-aa1d-bd76289877a7" Aug 5 22:45:31.037170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405-shm.mount: Deactivated successfully. Aug 5 22:45:31.037310 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9-shm.mount: Deactivated successfully. 
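[Editor's note] Every sandbox failure above (calico-kube-controllers-7d999cf867-gtvwz and both coredns pods) reduces to the same condition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes only once it is running, so pod networking cannot be set up or torn down until that pod is up. A hypothetical helper illustrating the dependency (not part of Calico):

```python
import os
import time

NODENAME_FILE = "/var/lib/calico/nodename"   # the path named in the errors above

def wait_for_calico_node(timeout_s: float = 60.0) -> str:
    # Hypothetical helper: block until calico-node has written its node name,
    # i.e. until the condition the CNI plugin checks for is satisfied.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if os.path.exists(NODENAME_FILE):
            with open(NODENAME_FILE) as f:
                return f.read().strip()
        time.sleep(1.0)
    raise TimeoutError(f"{NODENAME_FILE} still missing after {timeout_s}s")
```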
Aug 5 22:45:31.564130 systemd[1]: Created slice kubepods-besteffort-pod385bee24_def7_4848_aef0_c366a7421715.slice - libcontainer container kubepods-besteffort-pod385bee24_def7_4848_aef0_c366a7421715.slice. Aug 5 22:45:31.567953 containerd[1460]: time="2024-08-05T22:45:31.567906833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f56wl,Uid:385bee24-def7-4848-aef0-c366a7421715,Namespace:calico-system,Attempt:0,}" Aug 5 22:45:31.674999 containerd[1460]: time="2024-08-05T22:45:31.674932984Z" level=error msg="Failed to destroy network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:31.677654 containerd[1460]: time="2024-08-05T22:45:31.675399028Z" level=error msg="encountered an error cleaning up failed sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:31.677654 containerd[1460]: time="2024-08-05T22:45:31.675504509Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f56wl,Uid:385bee24-def7-4848-aef0-c366a7421715,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:31.678422 kubelet[2622]: E0805 22:45:31.678208 2622 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:31.678422 kubelet[2622]: E0805 22:45:31.678307 2622 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f56wl" Aug 5 22:45:31.678422 kubelet[2622]: E0805 22:45:31.678360 2622 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f56wl" Aug 5 22:45:31.679868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3-shm.mount: Deactivated successfully. 
Aug 5 22:45:31.682471 kubelet[2622]: E0805 22:45:31.678628 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f56wl_calico-system(385bee24-def7-4848-aef0-c366a7421715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f56wl_calico-system(385bee24-def7-4848-aef0-c366a7421715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:31.811980 kubelet[2622]: I0805 22:45:31.811941 2622 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:31.815585 containerd[1460]: time="2024-08-05T22:45:31.814633856Z" level=info msg="StopPodSandbox for \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\"" Aug 5 22:45:31.815585 containerd[1460]: time="2024-08-05T22:45:31.815016447Z" level=info msg="Ensure that sandbox 830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3 in task-service has been cleanup successfully" Aug 5 22:45:31.902698 containerd[1460]: time="2024-08-05T22:45:31.902622402Z" level=error msg="StopPodSandbox for \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\" failed" error="failed to destroy network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:45:31.903288 kubelet[2622]: E0805 22:45:31.903074 2622 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:31.903288 kubelet[2622]: E0805 22:45:31.903142 2622 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3"} Aug 5 22:45:31.903288 kubelet[2622]: E0805 22:45:31.903197 2622 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"385bee24-def7-4848-aef0-c366a7421715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:45:31.903288 kubelet[2622]: E0805 22:45:31.903239 2622 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"385bee24-def7-4848-aef0-c366a7421715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f56wl" podUID="385bee24-def7-4848-aef0-c366a7421715" Aug 5 22:45:36.937939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537672905.mount: Deactivated successfully. Aug 5 22:45:36.982947 containerd[1460]: time="2024-08-05T22:45:36.982868232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:36.984395 containerd[1460]: time="2024-08-05T22:45:36.984269259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:45:36.986079 containerd[1460]: time="2024-08-05T22:45:36.985995379Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:36.990654 containerd[1460]: time="2024-08-05T22:45:36.990564077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:36.992119 containerd[1460]: time="2024-08-05T22:45:36.991850616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 6.19613583s" Aug 5 22:45:36.992119 containerd[1460]: time="2024-08-05T22:45:36.991909893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:45:37.016884 containerd[1460]: time="2024-08-05T22:45:37.016833640Z" level=info msg="CreateContainer within sandbox \"f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:45:37.043392 containerd[1460]: time="2024-08-05T22:45:37.043328058Z" level=info msg="CreateContainer within sandbox \"f807924dad057f104685cd848346a637b67f7d5d1e777cc15823f6eb76ab0450\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"60cfc6b278f2b4a8eaba30fb61689dc01d855833527fc83db2a0677d962c17a9\"" Aug 5 22:45:37.044569 containerd[1460]: time="2024-08-05T22:45:37.043927987Z" level=info msg="StartContainer for \"60cfc6b278f2b4a8eaba30fb61689dc01d855833527fc83db2a0677d962c17a9\"" Aug 5 22:45:37.082764 systemd[1]: Started cri-containerd-60cfc6b278f2b4a8eaba30fb61689dc01d855833527fc83db2a0677d962c17a9.scope - libcontainer container 60cfc6b278f2b4a8eaba30fb61689dc01d855833527fc83db2a0677d962c17a9. Aug 5 22:45:37.127031 containerd[1460]: time="2024-08-05T22:45:37.126967127Z" level=info msg="StartContainer for \"60cfc6b278f2b4a8eaba30fb61689dc01d855833527fc83db2a0677d962c17a9\" returns successfully" Aug 5 22:45:37.244067 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:45:37.244254 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 22:45:39.489559 systemd-networkd[1372]: vxlan.calico: Link UP Aug 5 22:45:39.489573 systemd-networkd[1372]: vxlan.calico: Gained carrier Aug 5 22:45:41.433155 systemd-networkd[1372]: vxlan.calico: Gained IPv6LL Aug 5 22:45:42.557052 containerd[1460]: time="2024-08-05T22:45:42.556431353Z" level=info msg="StopPodSandbox for \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\"" Aug 5 22:45:42.557856 containerd[1460]: time="2024-08-05T22:45:42.556480616Z" level=info msg="StopPodSandbox for \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\"" Aug 5 22:45:42.559213 containerd[1460]: time="2024-08-05T22:45:42.556519394Z" level=info msg="StopPodSandbox for \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\"" Aug 5 22:45:42.678646 kubelet[2622]: I0805 22:45:42.678558 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xbrg9" podStartSLOduration=9.452523668 podStartE2EDuration="21.678534426s" podCreationTimestamp="2024-08-05 22:45:21 +0000 UTC" firstStartedPulling="2024-08-05 22:45:24.767286596 +0000 UTC m=+29.385288829" lastFinishedPulling="2024-08-05 22:45:36.993297357 +0000 UTC m=+41.611299587" observedRunningTime="2024-08-05 22:45:37.854657768 +0000 UTC m=+42.472660012" watchObservedRunningTime="2024-08-05 22:45:42.678534426 +0000 UTC m=+47.296536670" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.677 [INFO][4190] k8s.go 608: Cleaning up netns ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.680 [INFO][4190] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" iface="eth0" netns="/var/run/netns/cni-866b1384-e27a-49a7-33cd-7c26ec80064d" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.682 [INFO][4190] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" iface="eth0" netns="/var/run/netns/cni-866b1384-e27a-49a7-33cd-7c26ec80064d" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.686 [INFO][4190] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" iface="eth0" netns="/var/run/netns/cni-866b1384-e27a-49a7-33cd-7c26ec80064d" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.687 [INFO][4190] k8s.go 615: Releasing IP address(es) ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.687 [INFO][4190] utils.go 188: Calico CNI releasing IP address ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.756 [INFO][4206] ipam_plugin.go 411: Releasing address using handleID ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.757 [INFO][4206] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.757 [INFO][4206] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.768 [WARNING][4206] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.768 [INFO][4206] ipam_plugin.go 439: Releasing address using workloadID ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.770 [INFO][4206] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:42.777157 containerd[1460]: 2024-08-05 22:45:42.771 [INFO][4190] k8s.go 621: Teardown processing complete. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:42.777157 containerd[1460]: time="2024-08-05T22:45:42.774166189Z" level=info msg="TearDown network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\" successfully" Aug 5 22:45:42.777157 containerd[1460]: time="2024-08-05T22:45:42.774222872Z" level=info msg="StopPodSandbox for \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\" returns successfully" Aug 5 22:45:42.783246 containerd[1460]: time="2024-08-05T22:45:42.782340874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f56wl,Uid:385bee24-def7-4848-aef0-c366a7421715,Namespace:calico-system,Attempt:1,}" Aug 5 22:45:42.782859 systemd[1]: run-netns-cni\x2d866b1384\x2de27a\x2d49a7\x2d33cd\x2d7c26ec80064d.mount: Deactivated successfully. Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.685 [INFO][4180] k8s.go 608: Cleaning up netns ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.687 [INFO][4180] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" iface="eth0" netns="/var/run/netns/cni-7fa3eecc-187d-9dfe-7225-891c426c508e" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.688 [INFO][4180] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" iface="eth0" netns="/var/run/netns/cni-7fa3eecc-187d-9dfe-7225-891c426c508e" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.688 [INFO][4180] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" iface="eth0" netns="/var/run/netns/cni-7fa3eecc-187d-9dfe-7225-891c426c508e" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.688 [INFO][4180] k8s.go 615: Releasing IP address(es) ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.689 [INFO][4180] utils.go 188: Calico CNI releasing IP address ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.767 [INFO][4207] ipam_plugin.go 411: Releasing address using handleID ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.767 [INFO][4207] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.770 [INFO][4207] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.783 [WARNING][4207] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.784 [INFO][4207] ipam_plugin.go 439: Releasing address using workloadID ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.787 [INFO][4207] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:42.794456 containerd[1460]: 2024-08-05 22:45:42.793 [INFO][4180] k8s.go 621: Teardown processing complete. ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:42.801099 containerd[1460]: time="2024-08-05T22:45:42.794764085Z" level=info msg="TearDown network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\" successfully" Aug 5 22:45:42.801099 containerd[1460]: time="2024-08-05T22:45:42.794808478Z" level=info msg="StopPodSandbox for \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\" returns successfully" Aug 5 22:45:42.801771 containerd[1460]: time="2024-08-05T22:45:42.801715667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbg5,Uid:7539f340-cbde-47bc-b2a5-8717e2e430a7,Namespace:kube-system,Attempt:1,}" Aug 5 22:45:42.805423 systemd[1]: run-netns-cni\x2d7fa3eecc\x2d187d\x2d9dfe\x2d7225\x2d891c426c508e.mount: Deactivated successfully. Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.703 [INFO][4186] k8s.go 608: Cleaning up netns ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.703 [INFO][4186] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" iface="eth0" netns="/var/run/netns/cni-053d613c-d4f1-2d1b-448f-91964a4c5b8b" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.704 [INFO][4186] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" iface="eth0" netns="/var/run/netns/cni-053d613c-d4f1-2d1b-448f-91964a4c5b8b" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.704 [INFO][4186] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" iface="eth0" netns="/var/run/netns/cni-053d613c-d4f1-2d1b-448f-91964a4c5b8b" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.704 [INFO][4186] k8s.go 615: Releasing IP address(es) ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.704 [INFO][4186] utils.go 188: Calico CNI releasing IP address ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.767 [INFO][4214] ipam_plugin.go 411: Releasing address using handleID ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.768 [INFO][4214] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.787 [INFO][4214] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.824 [WARNING][4214] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.824 [INFO][4214] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.830 [INFO][4214] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:42.836300 containerd[1460]: 2024-08-05 22:45:42.833 [INFO][4186] k8s.go 621: Teardown processing complete. ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:42.843485 containerd[1460]: time="2024-08-05T22:45:42.842194042Z" level=info msg="TearDown network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\" successfully" Aug 5 22:45:42.843713 containerd[1460]: time="2024-08-05T22:45:42.843678727Z" level=info msg="StopPodSandbox for \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\" returns successfully" Aug 5 22:45:42.845064 systemd[1]: run-netns-cni\x2d053d613c\x2dd4f1\x2d2d1b\x2d448f\x2d91964a4c5b8b.mount: Deactivated successfully. 
Aug 5 22:45:42.850178 containerd[1460]: time="2024-08-05T22:45:42.849499511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pqbg,Uid:cb01de05-8660-4e6b-aa1d-bd76289877a7,Namespace:kube-system,Attempt:1,}" Aug 5 22:45:43.112967 systemd-networkd[1372]: cali93e8645ce0b: Link UP Aug 5 22:45:43.113396 systemd-networkd[1372]: cali93e8645ce0b: Gained carrier Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:42.948 [INFO][4226] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0 csi-node-driver- calico-system 385bee24-def7-4848-aef0-c366a7421715 771 0 2024-08-05 22:45:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal csi-node-driver-f56wl eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali93e8645ce0b [] []}} ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:42.948 [INFO][4226] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.032 [INFO][4262] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" HandleID="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.051 [INFO][4262] ipam_plugin.go 264: Auto assigning IP ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" HandleID="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000362290), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", "pod":"csi-node-driver-f56wl", "timestamp":"2024-08-05 22:45:43.032240604 +0000 UTC"}, Hostname:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.052 [INFO][4262] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.052 [INFO][4262] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.052 [INFO][4262] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal' Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.055 [INFO][4262] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.062 [INFO][4262] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.070 [INFO][4262] ipam.go 489: Trying affinity for 192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.073 [INFO][4262] ipam.go 155: Attempting to load block cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.076 [INFO][4262] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.076 [INFO][4262] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.078 [INFO][4262] ipam.go 1685: Creating new handle: k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9 Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.087 [INFO][4262] ipam.go 1203: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.097 [INFO][4262] ipam.go 1216: Successfully claimed IPs: [192.168.127.65/26] block=192.168.127.64/26 handle="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.097 [INFO][4262] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.65/26] handle="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.098 [INFO][4262] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:45:43.155579 containerd[1460]: 2024-08-05 22:45:43.098 [INFO][4262] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.127.65/26] IPv6=[] ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" HandleID="k8s-pod-network.f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:43.156727 containerd[1460]: 2024-08-05 22:45:43.105 [INFO][4226] k8s.go 386: Populated endpoint ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"385bee24-def7-4848-aef0-c366a7421715", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-f56wl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali93e8645ce0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:43.156727 containerd[1460]: 2024-08-05 22:45:43.107 [INFO][4226] k8s.go 387: Calico CNI using IPs: [192.168.127.65/32] ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:43.156727 containerd[1460]: 2024-08-05 22:45:43.107 [INFO][4226] dataplane_linux.go 68: Setting the host side veth name to cali93e8645ce0b ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:43.156727 containerd[1460]: 2024-08-05 22:45:43.113 [INFO][4226] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:43.156727 containerd[1460]: 2024-08-05 22:45:43.116 
[INFO][4226] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"385bee24-def7-4848-aef0-c366a7421715", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9", Pod:"csi-node-driver-f56wl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali93e8645ce0b", MAC:"9a:46:4b:b9:ea:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:43.156727 containerd[1460]: 2024-08-05 22:45:43.149 [INFO][4226] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9" Namespace="calico-system" Pod="csi-node-driver-f56wl" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:43.210438 containerd[1460]: time="2024-08-05T22:45:43.210278309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:43.218525 systemd-networkd[1372]: califa6876369b0: Link UP Aug 5 22:45:43.221650 systemd-networkd[1372]: califa6876369b0: Gained carrier Aug 5 22:45:43.223256 containerd[1460]: time="2024-08-05T22:45:43.222471048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:43.224572 containerd[1460]: time="2024-08-05T22:45:43.222567964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:43.224572 containerd[1460]: time="2024-08-05T22:45:43.222919317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:42.958 [INFO][4237] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0 coredns-7db6d8ff4d- kube-system 7539f340-cbde-47bc-b2a5-8717e2e430a7 772 0 2024-08-05 22:45:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal coredns-7db6d8ff4d-8cbg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa6876369b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:42.958 [INFO][4237] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.040 [INFO][4263] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" HandleID="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.065 [INFO][4263] ipam_plugin.go 264: Auto assigning IP ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" HandleID="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000d4820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-8cbg5", "timestamp":"2024-08-05 22:45:43.040178123 +0000 UTC"}, Hostname:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.065 [INFO][4263] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.098 [INFO][4263] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.099 [INFO][4263] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal' Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.104 [INFO][4263] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.121 [INFO][4263] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.153 [INFO][4263] ipam.go 489: Trying affinity for 192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.157 [INFO][4263] ipam.go 155: Attempting to load block cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.164 [INFO][4263] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.164 [INFO][4263] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.169 [INFO][4263] ipam.go 1685: Creating new handle: k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.178 [INFO][4263] ipam.go 1203: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.189 [INFO][4263] ipam.go 1216: Successfully claimed IPs: [192.168.127.66/26] block=192.168.127.64/26 handle="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.189 [INFO][4263] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.66/26] handle="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.189 [INFO][4263] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:45:43.256712 containerd[1460]: 2024-08-05 22:45:43.189 [INFO][4263] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.127.66/26] IPv6=[] ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" HandleID="k8s-pod-network.fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:43.259527 containerd[1460]: 2024-08-05 22:45:43.195 [INFO][4237] k8s.go 386: Populated endpoint ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7539f340-cbde-47bc-b2a5-8717e2e430a7", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-8cbg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa6876369b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:43.259527 containerd[1460]: 2024-08-05 22:45:43.195 [INFO][4237] k8s.go 387: Calico CNI using IPs: [192.168.127.66/32] ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:43.259527 containerd[1460]: 2024-08-05 22:45:43.196 [INFO][4237] dataplane_linux.go 68: Setting the host side veth name to califa6876369b0 ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:43.259527 containerd[1460]: 2024-08-05 22:45:43.226 [INFO][4237] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:43.259527 containerd[1460]: 2024-08-05 22:45:43.227 [INFO][4237] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7539f340-cbde-47bc-b2a5-8717e2e430a7", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c", Pod:"coredns-7db6d8ff4d-8cbg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa6876369b0", MAC:"3e:e8:ff:47:dd:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:43.259527 containerd[1460]: 2024-08-05 22:45:43.246 [INFO][4237] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8cbg5" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:43.283729 systemd[1]: Started cri-containerd-f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9.scope - libcontainer container f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9. 
Aug 5 22:45:43.302216 systemd-networkd[1372]: cali857a8b38439: Link UP Aug 5 22:45:43.306848 systemd-networkd[1372]: cali857a8b38439: Gained carrier Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.010 [INFO][4248] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0 coredns-7db6d8ff4d- kube-system cb01de05-8660-4e6b-aa1d-bd76289877a7 773 0 2024-08-05 22:45:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal coredns-7db6d8ff4d-5pqbg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali857a8b38439 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.011 [INFO][4248] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.090 [INFO][4275] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" HandleID="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.114 [INFO][4275] ipam_plugin.go 264: Auto assigning IP ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" HandleID="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051000), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-5pqbg", "timestamp":"2024-08-05 22:45:43.090383245 +0000 UTC"}, Hostname:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.114 [INFO][4275] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.190 [INFO][4275] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.190 [INFO][4275] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal' Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.197 [INFO][4275] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.212 [INFO][4275] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.235 [INFO][4275] ipam.go 489: Trying affinity for 192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.243 [INFO][4275] ipam.go 155: Attempting to load block cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.251 [INFO][4275] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.251 [INFO][4275] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.255 [INFO][4275] ipam.go 1685: Creating new handle: k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606 Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.270 [INFO][4275] ipam.go 1203: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.287 [INFO][4275] ipam.go 1216: Successfully claimed IPs: [192.168.127.67/26] block=192.168.127.64/26 handle="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.288 [INFO][4275] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.67/26] handle="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.288 [INFO][4275] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:45:43.344001 containerd[1460]: 2024-08-05 22:45:43.289 [INFO][4275] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.127.67/26] IPv6=[] ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" HandleID="k8s-pod-network.fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:43.345186 containerd[1460]: 2024-08-05 22:45:43.294 [INFO][4248] k8s.go 386: Populated endpoint ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cb01de05-8660-4e6b-aa1d-bd76289877a7", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-5pqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857a8b38439", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:43.345186 containerd[1460]: 2024-08-05 22:45:43.295 [INFO][4248] k8s.go 387: Calico CNI using IPs: [192.168.127.67/32] ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:43.345186 containerd[1460]: 2024-08-05 22:45:43.295 [INFO][4248] dataplane_linux.go 68: Setting the host side veth name to cali857a8b38439 ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:43.345186 containerd[1460]: 2024-08-05 22:45:43.307 [INFO][4248] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:43.345186 containerd[1460]: 2024-08-05 22:45:43.308 [INFO][4248] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cb01de05-8660-4e6b-aa1d-bd76289877a7", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606", Pod:"coredns-7db6d8ff4d-5pqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857a8b38439", MAC:"b6:b6:29:fb:56:c8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:43.345186 containerd[1460]: 2024-08-05 22:45:43.334 [INFO][4248] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pqbg" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:43.405212 containerd[1460]: time="2024-08-05T22:45:43.404706634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:43.405212 containerd[1460]: time="2024-08-05T22:45:43.404796677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:43.405212 containerd[1460]: time="2024-08-05T22:45:43.404831322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:43.405212 containerd[1460]: time="2024-08-05T22:45:43.404855133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:43.407110 containerd[1460]: time="2024-08-05T22:45:43.405960151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:43.407110 containerd[1460]: time="2024-08-05T22:45:43.406032856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:43.407110 containerd[1460]: time="2024-08-05T22:45:43.406071480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:43.407110 containerd[1460]: time="2024-08-05T22:45:43.406095945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:43.421865 containerd[1460]: time="2024-08-05T22:45:43.421649589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f56wl,Uid:385bee24-def7-4848-aef0-c366a7421715,Namespace:calico-system,Attempt:1,} returns sandbox id \"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9\"" Aug 5 22:45:43.432221 containerd[1460]: time="2024-08-05T22:45:43.431489690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:45:43.453957 systemd[1]: Started cri-containerd-fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606.scope - libcontainer container fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606. Aug 5 22:45:43.473172 systemd[1]: Started cri-containerd-fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c.scope - libcontainer container fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c. 
Aug 5 22:45:43.548793 containerd[1460]: time="2024-08-05T22:45:43.548738089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pqbg,Uid:cb01de05-8660-4e6b-aa1d-bd76289877a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606\"" Aug 5 22:45:43.555808 containerd[1460]: time="2024-08-05T22:45:43.555751484Z" level=info msg="CreateContainer within sandbox \"fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:45:43.557413 containerd[1460]: time="2024-08-05T22:45:43.557338470Z" level=info msg="StopPodSandbox for \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\"" Aug 5 22:45:43.597619 containerd[1460]: time="2024-08-05T22:45:43.596427776Z" level=info msg="CreateContainer within sandbox \"fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"53cdb019677a5d2fb656e82cb27cfa35cf9ee30b27845379553a64d2d9d8213d\"" Aug 5 22:45:43.598517 containerd[1460]: time="2024-08-05T22:45:43.598447710Z" level=info msg="StartContainer for \"53cdb019677a5d2fb656e82cb27cfa35cf9ee30b27845379553a64d2d9d8213d\"" Aug 5 22:45:43.599811 containerd[1460]: time="2024-08-05T22:45:43.599755774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8cbg5,Uid:7539f340-cbde-47bc-b2a5-8717e2e430a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c\"" Aug 5 22:45:43.611162 containerd[1460]: time="2024-08-05T22:45:43.611085329Z" level=info msg="CreateContainer within sandbox \"fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:45:43.636948 containerd[1460]: time="2024-08-05T22:45:43.636761239Z" level=info msg="CreateContainer within sandbox \"fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9e331f179d2e3aaf93c6829db2b9ae8741f3ec3b2fd02d3e558382aa4ef5ea26\"" Aug 5 22:45:43.640517 containerd[1460]: time="2024-08-05T22:45:43.639963432Z" level=info msg="StartContainer for \"9e331f179d2e3aaf93c6829db2b9ae8741f3ec3b2fd02d3e558382aa4ef5ea26\"" Aug 5 22:45:43.658668 systemd[1]: Started cri-containerd-53cdb019677a5d2fb656e82cb27cfa35cf9ee30b27845379553a64d2d9d8213d.scope - libcontainer container 53cdb019677a5d2fb656e82cb27cfa35cf9ee30b27845379553a64d2d9d8213d. Aug 5 22:45:43.718999 systemd[1]: Started cri-containerd-9e331f179d2e3aaf93c6829db2b9ae8741f3ec3b2fd02d3e558382aa4ef5ea26.scope - libcontainer container 9e331f179d2e3aaf93c6829db2b9ae8741f3ec3b2fd02d3e558382aa4ef5ea26. Aug 5 22:45:43.761332 containerd[1460]: time="2024-08-05T22:45:43.761127433Z" level=info msg="StartContainer for \"53cdb019677a5d2fb656e82cb27cfa35cf9ee30b27845379553a64d2d9d8213d\" returns successfully" Aug 5 22:45:43.870850 containerd[1460]: time="2024-08-05T22:45:43.870776725Z" level=info msg="StartContainer for \"9e331f179d2e3aaf93c6829db2b9ae8741f3ec3b2fd02d3e558382aa4ef5ea26\" returns successfully" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.729 [INFO][4453] k8s.go 608: Cleaning up netns ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.733 [INFO][4453] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" iface="eth0" netns="/var/run/netns/cni-2cfbe15f-4fc6-5a5b-f3fb-453a57a7e627" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.734 [INFO][4453] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" iface="eth0" netns="/var/run/netns/cni-2cfbe15f-4fc6-5a5b-f3fb-453a57a7e627" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.734 [INFO][4453] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" iface="eth0" netns="/var/run/netns/cni-2cfbe15f-4fc6-5a5b-f3fb-453a57a7e627" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.734 [INFO][4453] k8s.go 615: Releasing IP address(es) ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.734 [INFO][4453] utils.go 188: Calico CNI releasing IP address ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.832 [INFO][4514] ipam_plugin.go 411: Releasing address using handleID ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.832 [INFO][4514] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.832 [INFO][4514] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.873 [WARNING][4514] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.873 [INFO][4514] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.889 [INFO][4514] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:43.898046 containerd[1460]: 2024-08-05 22:45:43.894 [INFO][4453] k8s.go 621: Teardown processing complete. 
ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:43.899642 containerd[1460]: time="2024-08-05T22:45:43.899370332Z" level=info msg="TearDown network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\" successfully" Aug 5 22:45:43.899642 containerd[1460]: time="2024-08-05T22:45:43.899450188Z" level=info msg="StopPodSandbox for \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\" returns successfully" Aug 5 22:45:43.900317 containerd[1460]: time="2024-08-05T22:45:43.900279993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d999cf867-gtvwz,Uid:e56c96de-82fb-44bf-8d12-6a7d6c5ed536,Namespace:calico-system,Attempt:1,}" Aug 5 22:45:43.910882 systemd[1]: run-netns-cni\x2d2cfbe15f\x2d4fc6\x2d5a5b\x2df3fb\x2d453a57a7e627.mount: Deactivated successfully. Aug 5 22:45:43.939822 kubelet[2622]: I0805 22:45:43.936855 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5pqbg" podStartSLOduration=34.936830638 podStartE2EDuration="34.936830638s" podCreationTimestamp="2024-08-05 22:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:45:43.930027208 +0000 UTC m=+48.548029453" watchObservedRunningTime="2024-08-05 22:45:43.936830638 +0000 UTC m=+48.554832885" Aug 5 22:45:44.182338 systemd-networkd[1372]: cali936cefa85d1: Link UP Aug 5 22:45:44.184671 systemd-networkd[1372]: cali936cefa85d1: Gained carrier Aug 5 22:45:44.197981 kubelet[2622]: I0805 22:45:44.197137 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8cbg5" podStartSLOduration=35.19710444 podStartE2EDuration="35.19710444s" podCreationTimestamp="2024-08-05 22:45:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:45:44.03682842 +0000 UTC m=+48.654830664" watchObservedRunningTime="2024-08-05 22:45:44.19710444 +0000 UTC m=+48.815106683" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.051 [INFO][4547] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0 calico-kube-controllers-7d999cf867- calico-system e56c96de-82fb-44bf-8d12-6a7d6c5ed536 793 0 2024-08-05 22:45:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d999cf867 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal calico-kube-controllers-7d999cf867-gtvwz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali936cefa85d1 [] []}} ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Namespace="calico-system" Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.052 [INFO][4547] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" 
Namespace="calico-system" Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.116 [INFO][4563] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" HandleID="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.127 [INFO][4563] ipam_plugin.go 264: Auto assigning IP ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" HandleID="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f4460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", "pod":"calico-kube-controllers-7d999cf867-gtvwz", "timestamp":"2024-08-05 22:45:44.116600189 +0000 UTC"}, Hostname:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.127 [INFO][4563] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.127 [INFO][4563] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.127 [INFO][4563] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal' Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.130 [INFO][4563] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.134 [INFO][4563] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.139 [INFO][4563] ipam.go 489: Trying affinity for 192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.142 [INFO][4563] ipam.go 155: Attempting to load block cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.150 [INFO][4563] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.151 [INFO][4563] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.153 [INFO][4563] ipam.go 1685: Creating new handle: k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6 Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.162 [INFO][4563] ipam.go 1203: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.171 [INFO][4563] ipam.go 1216: Successfully claimed IPs: [192.168.127.68/26] block=192.168.127.64/26 handle="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.171 [INFO][4563] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.68/26] handle="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.172 [INFO][4563] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:45:44.202160 containerd[1460]: 2024-08-05 22:45:44.172 [INFO][4563] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.127.68/26] IPv6=[] ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" HandleID="k8s-pod-network.bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:44.206523 containerd[1460]: 2024-08-05 22:45:44.174 [INFO][4547] k8s.go 386: Populated endpoint ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Namespace="calico-system" Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0", GenerateName:"calico-kube-controllers-7d999cf867-", Namespace:"calico-system", SelfLink:"", UID:"e56c96de-82fb-44bf-8d12-6a7d6c5ed536", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d999cf867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-7d999cf867-gtvwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936cefa85d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:44.206523 containerd[1460]: 2024-08-05 22:45:44.175 [INFO][4547] k8s.go 387: Calico CNI using IPs: [192.168.127.68/32] ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Namespace="calico-system" Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:44.206523 containerd[1460]: 2024-08-05 22:45:44.175 [INFO][4547] dataplane_linux.go 68: Setting the host side veth name to cali936cefa85d1 ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Namespace="calico-system" Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:44.206523 containerd[1460]: 2024-08-05 22:45:44.177 [INFO][4547] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Namespace="calico-system" 
Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:44.206523 containerd[1460]: 2024-08-05 22:45:44.178 [INFO][4547] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Namespace="calico-system" Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0", GenerateName:"calico-kube-controllers-7d999cf867-", Namespace:"calico-system", SelfLink:"", UID:"e56c96de-82fb-44bf-8d12-6a7d6c5ed536", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d999cf867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6", Pod:"calico-kube-controllers-7d999cf867-gtvwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936cefa85d1", MAC:"ea:20:d5:0f:90:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:44.206523 containerd[1460]: 2024-08-05 22:45:44.196 [INFO][4547] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6" Namespace="calico-system" Pod="calico-kube-controllers-7d999cf867-gtvwz" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:44.247644 containerd[1460]: time="2024-08-05T22:45:44.247295160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:45:44.247644 containerd[1460]: time="2024-08-05T22:45:44.247393553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:44.247644 containerd[1460]: time="2024-08-05T22:45:44.247542694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:45:44.248069 containerd[1460]: time="2024-08-05T22:45:44.247663373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:45:44.280605 systemd[1]: Started cri-containerd-bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6.scope - libcontainer container bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6. Aug 5 22:45:44.348265 containerd[1460]: time="2024-08-05T22:45:44.348065155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d999cf867-gtvwz,Uid:e56c96de-82fb-44bf-8d12-6a7d6c5ed536,Namespace:calico-system,Attempt:1,} returns sandbox id \"bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6\"" Aug 5 22:45:44.376684 systemd-networkd[1372]: califa6876369b0: Gained IPv6LL Aug 5 22:45:44.637160 containerd[1460]: time="2024-08-05T22:45:44.637074539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:44.638709 containerd[1460]: time="2024-08-05T22:45:44.638629277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:45:44.641525 containerd[1460]: time="2024-08-05T22:45:44.639967482Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:44.643923 containerd[1460]: time="2024-08-05T22:45:44.643874179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:44.645724 containerd[1460]: time="2024-08-05T22:45:44.645655447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.214107339s" Aug 5 22:45:44.645988 containerd[1460]: time="2024-08-05T22:45:44.645932117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:45:44.647918 containerd[1460]: time="2024-08-05T22:45:44.647877783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:45:44.651227 containerd[1460]: time="2024-08-05T22:45:44.651117186Z" level=info msg="CreateContainer within sandbox \"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:45:44.683335 containerd[1460]: time="2024-08-05T22:45:44.683260758Z" level=info msg="CreateContainer within sandbox \"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d2f79c314f08b013a8481d45f45a9f7abaf7cdefac1a8a25029d94fb02b80433\"" Aug 5 22:45:44.686206 containerd[1460]: time="2024-08-05T22:45:44.686154205Z" level=info msg="StartContainer for \"d2f79c314f08b013a8481d45f45a9f7abaf7cdefac1a8a25029d94fb02b80433\"" Aug 5 22:45:44.731737 systemd[1]: Started cri-containerd-d2f79c314f08b013a8481d45f45a9f7abaf7cdefac1a8a25029d94fb02b80433.scope - libcontainer container d2f79c314f08b013a8481d45f45a9f7abaf7cdefac1a8a25029d94fb02b80433. 
Aug 5 22:45:44.775677 containerd[1460]: time="2024-08-05T22:45:44.774751432Z" level=info msg="StartContainer for \"d2f79c314f08b013a8481d45f45a9f7abaf7cdefac1a8a25029d94fb02b80433\" returns successfully" Aug 5 22:45:44.954307 systemd-networkd[1372]: cali93e8645ce0b: Gained IPv6LL Aug 5 22:45:45.080769 systemd-networkd[1372]: cali857a8b38439: Gained IPv6LL Aug 5 22:45:45.274008 systemd-networkd[1372]: cali936cefa85d1: Gained IPv6LL Aug 5 22:45:46.820382 containerd[1460]: time="2024-08-05T22:45:46.820230410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:46.821998 containerd[1460]: time="2024-08-05T22:45:46.821910611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:45:46.823763 containerd[1460]: time="2024-08-05T22:45:46.823715842Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:46.827144 containerd[1460]: time="2024-08-05T22:45:46.827039804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:46.828854 containerd[1460]: time="2024-08-05T22:45:46.828130324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.180180714s" Aug 5 22:45:46.828854 containerd[1460]: time="2024-08-05T22:45:46.828180615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:45:46.829629 containerd[1460]: time="2024-08-05T22:45:46.829553309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:45:46.855333 containerd[1460]: time="2024-08-05T22:45:46.855192856Z" level=info msg="CreateContainer within sandbox \"bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:45:46.886501 containerd[1460]: time="2024-08-05T22:45:46.886065649Z" level=info msg="CreateContainer within sandbox \"bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"adbb9e0f39eb14ca5e9804d529a874a722a0a13b7599436593ca3c7145054e0e\"" Aug 5 22:45:46.888730 containerd[1460]: time="2024-08-05T22:45:46.888428052Z" level=info msg="StartContainer for \"adbb9e0f39eb14ca5e9804d529a874a722a0a13b7599436593ca3c7145054e0e\"" Aug 5 22:45:46.973720 systemd[1]: Started cri-containerd-adbb9e0f39eb14ca5e9804d529a874a722a0a13b7599436593ca3c7145054e0e.scope - libcontainer container adbb9e0f39eb14ca5e9804d529a874a722a0a13b7599436593ca3c7145054e0e. 
Aug 5 22:45:47.064292 containerd[1460]: time="2024-08-05T22:45:47.063386596Z" level=info msg="StartContainer for \"adbb9e0f39eb14ca5e9804d529a874a722a0a13b7599436593ca3c7145054e0e\" returns successfully" Aug 5 22:45:47.645817 ntpd[1428]: Listen normally on 7 vxlan.calico 192.168.127.64:123 Aug 5 22:45:47.645948 ntpd[1428]: Listen normally on 8 vxlan.calico [fe80::644e:95ff:fe93:e60e%4]:123 Aug 5 22:45:47.646608 ntpd[1428]: 5 Aug 22:45:47 ntpd[1428]: Listen normally on 7 vxlan.calico 192.168.127.64:123 Aug 5 22:45:47.646608 ntpd[1428]: 5 Aug 22:45:47 ntpd[1428]: Listen normally on 8 vxlan.calico [fe80::644e:95ff:fe93:e60e%4]:123 Aug 5 22:45:47.646608 ntpd[1428]: 5 Aug 22:45:47 ntpd[1428]: Listen normally on 9 cali93e8645ce0b [fe80::ecee:eeff:feee:eeee%7]:123 Aug 5 22:45:47.646608 ntpd[1428]: 5 Aug 22:45:47 ntpd[1428]: Listen normally on 10 califa6876369b0 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:45:47.646034 ntpd[1428]: Listen normally on 9 cali93e8645ce0b [fe80::ecee:eeff:feee:eeee%7]:123 Aug 5 22:45:47.646922 ntpd[1428]: 5 Aug 22:45:47 ntpd[1428]: Listen normally on 11 cali857a8b38439 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:45:47.646922 ntpd[1428]: 5 Aug 22:45:47 ntpd[1428]: Listen normally on 12 cali936cefa85d1 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:45:47.646096 ntpd[1428]: Listen normally on 10 califa6876369b0 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:45:47.646633 ntpd[1428]: Listen normally on 11 cali857a8b38439 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:45:47.646695 ntpd[1428]: Listen normally on 12 cali936cefa85d1 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:45:47.970400 kubelet[2622]: I0805 22:45:47.970219 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d999cf867-gtvwz" podStartSLOduration=29.491876814 podStartE2EDuration="31.970182355s" podCreationTimestamp="2024-08-05 22:45:16 +0000 UTC" firstStartedPulling="2024-08-05 22:45:44.351022028 +0000 UTC m=+48.969024259" lastFinishedPulling="2024-08-05 22:45:46.829327567 +0000 UTC m=+51.447329800" observedRunningTime="2024-08-05 22:45:47.967721871 +0000 UTC m=+52.585724115" watchObservedRunningTime="2024-08-05 22:45:47.970182355 +0000 UTC m=+52.588184602" Aug 5 22:45:48.344657 containerd[1460]: time="2024-08-05T22:45:48.344436833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:48.348345 containerd[1460]: time="2024-08-05T22:45:48.348268479Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:45:48.351121 containerd[1460]: time="2024-08-05T22:45:48.351065385Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:48.367136 containerd[1460]: time="2024-08-05T22:45:48.367033761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:45:48.368344 containerd[1460]: time="2024-08-05T22:45:48.368297747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.538645829s" Aug 5 22:45:48.369703 containerd[1460]: time="2024-08-05T22:45:48.369501285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:45:48.372898 containerd[1460]: time="2024-08-05T22:45:48.372830822Z" level=info msg="CreateContainer within sandbox \"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:45:48.398219 containerd[1460]: time="2024-08-05T22:45:48.397806655Z" level=info msg="CreateContainer within sandbox \"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"61da7a5e995e025dd895aa5dc83c7d18bb9b081d05057144b0831f262f1199e9\"" Aug 5 22:45:48.402496 containerd[1460]: time="2024-08-05T22:45:48.401360989Z" level=info msg="StartContainer for \"61da7a5e995e025dd895aa5dc83c7d18bb9b081d05057144b0831f262f1199e9\"" Aug 5 22:45:48.485891 systemd[1]: Started cri-containerd-61da7a5e995e025dd895aa5dc83c7d18bb9b081d05057144b0831f262f1199e9.scope - libcontainer container 61da7a5e995e025dd895aa5dc83c7d18bb9b081d05057144b0831f262f1199e9. Aug 5 22:45:48.550115 containerd[1460]: time="2024-08-05T22:45:48.549535732Z" level=info msg="StartContainer for \"61da7a5e995e025dd895aa5dc83c7d18bb9b081d05057144b0831f262f1199e9\" returns successfully" Aug 5 22:45:48.746415 kubelet[2622]: I0805 22:45:48.746301 2622 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:45:48.746415 kubelet[2622]: I0805 22:45:48.746348 2622 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:45:48.957410 kubelet[2622]: I0805 22:45:48.957326 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-f56wl" podStartSLOduration=29.014662887 podStartE2EDuration="33.957297766s" podCreationTimestamp="2024-08-05 22:45:15 +0000 UTC" firstStartedPulling="2024-08-05 22:45:43.428139186 +0000 UTC m=+48.046141420" lastFinishedPulling="2024-08-05 22:45:48.370774067 +0000 UTC m=+52.988776299" observedRunningTime="2024-08-05 22:45:48.955537915 +0000 UTC m=+53.573540159" watchObservedRunningTime="2024-08-05 22:45:48.957297766 +0000 UTC m=+53.575300011" Aug 5 22:45:55.528925 containerd[1460]: time="2024-08-05T22:45:55.528115709Z" level=info msg="StopPodSandbox for \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\"" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.593 [WARNING][4800] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0", GenerateName:"calico-kube-controllers-7d999cf867-", Namespace:"calico-system", SelfLink:"", UID:"e56c96de-82fb-44bf-8d12-6a7d6c5ed536", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d999cf867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6", Pod:"calico-kube-controllers-7d999cf867-gtvwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936cefa85d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.594 [INFO][4800] k8s.go 608: Cleaning up netns ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.594 [INFO][4800] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" iface="eth0" netns="" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.594 [INFO][4800] k8s.go 615: Releasing IP address(es) ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.594 [INFO][4800] utils.go 188: Calico CNI releasing IP address ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.623 [INFO][4808] ipam_plugin.go 411: Releasing address using handleID ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.623 [INFO][4808] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.624 [INFO][4808] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.646 [WARNING][4808] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.646 [INFO][4808] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.660 [INFO][4808] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:55.665999 containerd[1460]: 2024-08-05 22:45:55.663 [INFO][4800] k8s.go 621: Teardown processing complete. ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.665999 containerd[1460]: time="2024-08-05T22:45:55.665793125Z" level=info msg="TearDown network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\" successfully" Aug 5 22:45:55.665999 containerd[1460]: time="2024-08-05T22:45:55.665829010Z" level=info msg="StopPodSandbox for \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\" returns successfully" Aug 5 22:45:55.669682 containerd[1460]: time="2024-08-05T22:45:55.667668899Z" level=info msg="RemovePodSandbox for \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\"" Aug 5 22:45:55.669682 containerd[1460]: time="2024-08-05T22:45:55.667722603Z" level=info msg="Forcibly stopping sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\"" Aug 5 22:45:55.824449 systemd[1]: Started sshd@9-10.128.0.28:22-139.178.68.195:39070.service - OpenSSH per-connection server daemon (139.178.68.195:39070). Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.831 [WARNING][4826] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0", GenerateName:"calico-kube-controllers-7d999cf867-", Namespace:"calico-system", SelfLink:"", UID:"e56c96de-82fb-44bf-8d12-6a7d6c5ed536", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d999cf867", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"bbc0ec0777199050324d21426beab695b12195c419f18f50cc7e801bdb259fa6", Pod:"calico-kube-controllers-7d999cf867-gtvwz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali936cefa85d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.835 [INFO][4826] k8s.go 608: Cleaning up netns ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.835 [INFO][4826] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" iface="eth0" netns="" Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.835 [INFO][4826] k8s.go 615: Releasing IP address(es) ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.835 [INFO][4826] utils.go 188: Calico CNI releasing IP address ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.887 [INFO][4835] ipam_plugin.go 411: Releasing address using handleID ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.887 [INFO][4835] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.887 [INFO][4835] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.896 [WARNING][4835] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.896 [INFO][4835] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" HandleID="k8s-pod-network.4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--kube--controllers--7d999cf867--gtvwz-eth0" Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.898 [INFO][4835] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:55.902242 containerd[1460]: 2024-08-05 22:45:55.900 [INFO][4826] k8s.go 621: Teardown processing complete. ContainerID="4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9" Aug 5 22:45:55.902242 containerd[1460]: time="2024-08-05T22:45:55.902139213Z" level=info msg="TearDown network for sandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\" successfully" Aug 5 22:45:55.908273 containerd[1460]: time="2024-08-05T22:45:55.907967397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:45:55.908273 containerd[1460]: time="2024-08-05T22:45:55.908069987Z" level=info msg="RemovePodSandbox \"4cdbdbd76a2d2cdfcba8775bb8cec53cf29e5c4ab44eae2e36557999aba7ecc9\" returns successfully" Aug 5 22:45:55.909489 containerd[1460]: time="2024-08-05T22:45:55.909237372Z" level=info msg="StopPodSandbox for \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\"" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:55.959 [WARNING][4856] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"385bee24-def7-4848-aef0-c366a7421715", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9", Pod:"csi-node-driver-f56wl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali93e8645ce0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:55.959 [INFO][4856] k8s.go 608: Cleaning up netns ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:55.959 [INFO][4856] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" iface="eth0" netns="" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:55.960 [INFO][4856] k8s.go 615: Releasing IP address(es) ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:55.960 [INFO][4856] utils.go 188: Calico CNI releasing IP address ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:56.010 [INFO][4862] ipam_plugin.go 411: Releasing address using handleID ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:56.010 [INFO][4862] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:56.010 [INFO][4862] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:56.019 [WARNING][4862] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:56.020 [INFO][4862] ipam_plugin.go 439: Releasing address using workloadID ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:56.022 [INFO][4862] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:56.025137 containerd[1460]: 2024-08-05 22:45:56.023 [INFO][4856] k8s.go 621: Teardown processing complete. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.026073 containerd[1460]: time="2024-08-05T22:45:56.025172295Z" level=info msg="TearDown network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\" successfully" Aug 5 22:45:56.026073 containerd[1460]: time="2024-08-05T22:45:56.025208497Z" level=info msg="StopPodSandbox for \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\" returns successfully" Aug 5 22:45:56.026073 containerd[1460]: time="2024-08-05T22:45:56.025835805Z" level=info msg="RemovePodSandbox for \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\"" Aug 5 22:45:56.026073 containerd[1460]: time="2024-08-05T22:45:56.025877237Z" level=info msg="Forcibly stopping sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\"" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.078 [WARNING][4881] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"385bee24-def7-4848-aef0-c366a7421715", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"f97d1797cbf37e30dc0b708c66a2dc80cad7016677e5449465df39ac730072d9", Pod:"csi-node-driver-f56wl", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.127.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali93e8645ce0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.078 [INFO][4881] k8s.go 608: Cleaning up netns ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.078 [INFO][4881] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" iface="eth0" netns="" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.078 [INFO][4881] k8s.go 615: Releasing IP address(es) ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.078 [INFO][4881] utils.go 188: Calico CNI releasing IP address ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.108 [INFO][4888] ipam_plugin.go 411: Releasing address using handleID ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.109 [INFO][4888] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.109 [INFO][4888] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.118 [WARNING][4888] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.118 [INFO][4888] ipam_plugin.go 439: Releasing address using workloadID ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" HandleID="k8s-pod-network.830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-csi--node--driver--f56wl-eth0" Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.121 [INFO][4888] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:56.125244 containerd[1460]: 2024-08-05 22:45:56.122 [INFO][4881] k8s.go 621: Teardown processing complete. ContainerID="830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3" Aug 5 22:45:56.125244 containerd[1460]: time="2024-08-05T22:45:56.125158110Z" level=info msg="TearDown network for sandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\" successfully" Aug 5 22:45:56.132185 containerd[1460]: time="2024-08-05T22:45:56.132070040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:45:56.132185 containerd[1460]: time="2024-08-05T22:45:56.132173834Z" level=info msg="RemovePodSandbox \"830f0e629a1459e7396e98e1f06d7ad51da321b09c2c7f8795767c4c82dfc8e3\" returns successfully" Aug 5 22:45:56.132892 containerd[1460]: time="2024-08-05T22:45:56.132855250Z" level=info msg="StopPodSandbox for \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\"" Aug 5 22:45:56.169907 sshd[4834]: Accepted publickey for core from 139.178.68.195 port 39070 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:45:56.172747 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:45:56.185991 systemd-logind[1449]: New session 10 of user core. Aug 5 22:45:56.189746 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.206 [WARNING][4906] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cb01de05-8660-4e6b-aa1d-bd76289877a7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606", Pod:"coredns-7db6d8ff4d-5pqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857a8b38439", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.207 [INFO][4906] k8s.go 608: Cleaning up netns ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.207 [INFO][4906] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" iface="eth0" netns="" Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.207 [INFO][4906] k8s.go 615: Releasing IP address(es) ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.207 [INFO][4906] utils.go 188: Calico CNI releasing IP address ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.236 [INFO][4913] ipam_plugin.go 411: Releasing address using handleID ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.236 [INFO][4913] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.236 [INFO][4913] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.245 [WARNING][4913] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.245 [INFO][4913] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.247 [INFO][4913] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:56.250022 containerd[1460]: 2024-08-05 22:45:56.248 [INFO][4906] k8s.go 621: Teardown processing complete. ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.251254 containerd[1460]: time="2024-08-05T22:45:56.250065590Z" level=info msg="TearDown network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\" successfully" Aug 5 22:45:56.251254 containerd[1460]: time="2024-08-05T22:45:56.250101779Z" level=info msg="StopPodSandbox for \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\" returns successfully" Aug 5 22:45:56.251254 containerd[1460]: time="2024-08-05T22:45:56.250751207Z" level=info msg="RemovePodSandbox for \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\"" Aug 5 22:45:56.251254 containerd[1460]: time="2024-08-05T22:45:56.250794198Z" level=info msg="Forcibly stopping sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\"" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.299 [WARNING][4931] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cb01de05-8660-4e6b-aa1d-bd76289877a7", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"fe74292171c6961e45311c890aebb277588e8582cbcfa5127f88cfef81873606", Pod:"coredns-7db6d8ff4d-5pqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali857a8b38439", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.299 [INFO][4931] k8s.go 608: Cleaning up netns ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.300 [INFO][4931] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" iface="eth0" netns="" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.300 [INFO][4931] k8s.go 615: Releasing IP address(es) ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.300 [INFO][4931] utils.go 188: Calico CNI releasing IP address ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.328 [INFO][4937] ipam_plugin.go 411: Releasing address using handleID ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.328 [INFO][4937] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.328 [INFO][4937] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.337 [WARNING][4937] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.339 [INFO][4937] ipam_plugin.go 439: Releasing address using workloadID ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" HandleID="k8s-pod-network.de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--5pqbg-eth0" Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.344 [INFO][4937] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:56.351038 containerd[1460]: 2024-08-05 22:45:56.348 [INFO][4931] k8s.go 621: Teardown processing complete. ContainerID="de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405" Aug 5 22:45:56.352076 containerd[1460]: time="2024-08-05T22:45:56.351205461Z" level=info msg="TearDown network for sandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\" successfully" Aug 5 22:45:56.357854 containerd[1460]: time="2024-08-05T22:45:56.357744543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:45:56.358590 containerd[1460]: time="2024-08-05T22:45:56.357889235Z" level=info msg="RemovePodSandbox \"de6d864793a511ad14d4d433a1f3edd78402590ae807de44c2572ec16d2d5405\" returns successfully" Aug 5 22:45:56.359240 containerd[1460]: time="2024-08-05T22:45:56.359162084Z" level=info msg="StopPodSandbox for \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\"" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.440 [WARNING][4963] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7539f340-cbde-47bc-b2a5-8717e2e430a7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c", Pod:"coredns-7db6d8ff4d-8cbg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa6876369b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.441 [INFO][4963] k8s.go 608: Cleaning up netns ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.441 [INFO][4963] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" iface="eth0" netns="" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.441 [INFO][4963] k8s.go 615: Releasing IP address(es) ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.441 [INFO][4963] utils.go 188: Calico CNI releasing IP address ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.497 [INFO][4969] ipam_plugin.go 411: Releasing address using handleID ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.497 [INFO][4969] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.497 [INFO][4969] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.509 [WARNING][4969] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.509 [INFO][4969] ipam_plugin.go 439: Releasing address using workloadID ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.513 [INFO][4969] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:56.517267 containerd[1460]: 2024-08-05 22:45:56.515 [INFO][4963] k8s.go 621: Teardown processing complete. ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.518283 containerd[1460]: time="2024-08-05T22:45:56.517294353Z" level=info msg="TearDown network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\" successfully" Aug 5 22:45:56.518283 containerd[1460]: time="2024-08-05T22:45:56.517346848Z" level=info msg="StopPodSandbox for \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\" returns successfully" Aug 5 22:45:56.518387 containerd[1460]: time="2024-08-05T22:45:56.518298341Z" level=info msg="RemovePodSandbox for \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\"" Aug 5 22:45:56.518387 containerd[1460]: time="2024-08-05T22:45:56.518342983Z" level=info msg="Forcibly stopping sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\"" Aug 5 22:45:56.540825 sshd[4834]: pam_unix(sshd:session): session closed for user core Aug 5 22:45:56.557472 systemd[1]: sshd@9-10.128.0.28:22-139.178.68.195:39070.service: Deactivated successfully. Aug 5 22:45:56.565148 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:45:56.573495 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:45:56.576848 systemd-logind[1449]: Removed session 10. Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.614 [WARNING][4988] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7539f340-cbde-47bc-b2a5-8717e2e430a7", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 45, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"fbca2995c5b3ff7e5c0ebc9341a4dda3914c14630bcfd89c60b59dfe4c3fdc6c", Pod:"coredns-7db6d8ff4d-8cbg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa6876369b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.615 [INFO][4988] k8s.go 608: Cleaning up netns ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.615 [INFO][4988] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" iface="eth0" netns="" Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.615 [INFO][4988] k8s.go 615: Releasing IP address(es) ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.615 [INFO][4988] utils.go 188: Calico CNI releasing IP address ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.659 [INFO][4997] ipam_plugin.go 411: Releasing address using handleID ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.659 [INFO][4997] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.659 [INFO][4997] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.666 [WARNING][4997] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.666 [INFO][4997] ipam_plugin.go 439: Releasing address using workloadID ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" HandleID="k8s-pod-network.402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--8cbg5-eth0" Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.668 [INFO][4997] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:45:56.671049 containerd[1460]: 2024-08-05 22:45:56.669 [INFO][4988] k8s.go 621: Teardown processing complete. ContainerID="402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2" Aug 5 22:45:56.672341 containerd[1460]: time="2024-08-05T22:45:56.671146048Z" level=info msg="TearDown network for sandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\" successfully" Aug 5 22:45:56.676848 containerd[1460]: time="2024-08-05T22:45:56.676779579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:45:56.677025 containerd[1460]: time="2024-08-05T22:45:56.676886116Z" level=info msg="RemovePodSandbox \"402372377521cd6f7ef3a3d6e5d29bb6a30d8fbb2401d12e9bd8338b05b394d2\" returns successfully" Aug 5 22:45:56.677708 containerd[1460]: time="2024-08-05T22:45:56.677603487Z" level=info msg="StopPodSandbox for \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\"" Aug 5 22:45:56.677884 containerd[1460]: time="2024-08-05T22:45:56.677749813Z" level=info msg="TearDown network for sandbox \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" successfully" Aug 5 22:45:56.677884 containerd[1460]: time="2024-08-05T22:45:56.677771098Z" level=info msg="StopPodSandbox for \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" returns successfully" Aug 5 22:45:56.678509 containerd[1460]: time="2024-08-05T22:45:56.678396590Z" level=info msg="RemovePodSandbox for \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\"" Aug 5 22:45:56.678722 containerd[1460]: time="2024-08-05T22:45:56.678687830Z" level=info msg="Forcibly stopping sandbox \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\"" Aug 5 22:45:56.678866 containerd[1460]: time="2024-08-05T22:45:56.678799467Z" level=info msg="TearDown network for sandbox \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" successfully" Aug 5 22:45:56.685080 containerd[1460]: time="2024-08-05T22:45:56.685021177Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:45:56.685243 containerd[1460]: time="2024-08-05T22:45:56.685123523Z" level=info msg="RemovePodSandbox \"949fd236e7cb71fcc470b99330b80366c9bc88a1187b95ca5ba077840bfaf316\" returns successfully" Aug 5 22:45:56.685756 containerd[1460]: time="2024-08-05T22:45:56.685722024Z" level=info msg="StopPodSandbox for \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\"" Aug 5 22:45:56.685961 containerd[1460]: time="2024-08-05T22:45:56.685843094Z" level=info msg="TearDown network for sandbox \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" successfully" Aug 5 22:45:56.685961 containerd[1460]: time="2024-08-05T22:45:56.685865149Z" level=info msg="StopPodSandbox for \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" returns successfully" Aug 5 22:45:56.686294 containerd[1460]: time="2024-08-05T22:45:56.686252339Z" level=info msg="RemovePodSandbox for \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\"" Aug 5 22:45:56.686294 containerd[1460]: time="2024-08-05T22:45:56.686289706Z" level=info msg="Forcibly stopping sandbox \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\"" Aug 5 22:45:56.686443 containerd[1460]: time="2024-08-05T22:45:56.686368353Z" level=info msg="TearDown network for sandbox \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" successfully" Aug 5 22:45:56.691230 containerd[1460]: time="2024-08-05T22:45:56.691170855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:45:56.691497 containerd[1460]: time="2024-08-05T22:45:56.691274667Z" level=info msg="RemovePodSandbox \"2f7cbedb8599a17782ed45c4a11136bebc32554a4afba5a5eb88eb0c154d4088\" returns successfully" Aug 5 22:46:01.595983 systemd[1]: Started sshd@10-10.128.0.28:22-139.178.68.195:35976.service - OpenSSH per-connection server daemon (139.178.68.195:35976). Aug 5 22:46:01.893836 sshd[5058]: Accepted publickey for core from 139.178.68.195 port 35976 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:01.895952 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:01.901970 systemd-logind[1449]: New session 11 of user core. Aug 5 22:46:01.909819 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:46:02.187258 sshd[5058]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:02.194011 systemd[1]: sshd@10-10.128.0.28:22-139.178.68.195:35976.service: Deactivated successfully. Aug 5 22:46:02.197096 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:46:02.198243 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:46:02.200170 systemd-logind[1449]: Removed session 11. Aug 5 22:46:07.247931 systemd[1]: Started sshd@11-10.128.0.28:22-139.178.68.195:35982.service - OpenSSH per-connection server daemon (139.178.68.195:35982). Aug 5 22:46:07.542286 sshd[5076]: Accepted publickey for core from 139.178.68.195 port 35982 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:07.544382 sshd[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:07.551572 systemd-logind[1449]: New session 12 of user core. Aug 5 22:46:07.557735 systemd[1]: Started session-12.scope - Session 12 of User core. 
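The teardown entries above name each sandbox by a 64-character ContainerID, repeated across the Calico CNI and containerd messages. A minimal Go sketch, assuming log text shaped like these lines on stdin, that collects the distinct sandbox IDs mentioned (the regular expression is inferred from the ContainerID="..." fields visible above, not from any containerd or Calico API):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the ContainerID="<64 hex chars>" fields emitted in the entries
	// above; the pattern is inferred from those lines only.
	re := regexp.MustCompile(`ContainerID="([0-9a-f]{64})"`)
	seen := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			if !seen[m[1]] {
				seen[m[1]] = true
				fmt.Println(m[1]) // e.g. de6d864793a5... and 402372377521...
			}
		}
	}
}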
Aug 5 22:46:07.835297 sshd[5076]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:07.841321 systemd[1]: sshd@11-10.128.0.28:22-139.178.68.195:35982.service: Deactivated successfully. Aug 5 22:46:07.845035 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:46:07.847859 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:46:07.849649 systemd-logind[1449]: Removed session 12. Aug 5 22:46:07.893919 systemd[1]: Started sshd@12-10.128.0.28:22-139.178.68.195:35992.service - OpenSSH per-connection server daemon (139.178.68.195:35992). Aug 5 22:46:08.189158 sshd[5090]: Accepted publickey for core from 139.178.68.195 port 35992 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:08.191342 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:08.198166 systemd-logind[1449]: New session 13 of user core. Aug 5 22:46:08.204800 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:46:08.528911 sshd[5090]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:08.535748 systemd[1]: sshd@12-10.128.0.28:22-139.178.68.195:35992.service: Deactivated successfully. Aug 5 22:46:08.538661 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:46:08.539790 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:46:08.541783 systemd-logind[1449]: Removed session 13. Aug 5 22:46:08.584990 systemd[1]: Started sshd@13-10.128.0.28:22-139.178.68.195:35998.service - OpenSSH per-connection server daemon (139.178.68.195:35998). Aug 5 22:46:08.886630 sshd[5100]: Accepted publickey for core from 139.178.68.195 port 35998 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:08.888704 sshd[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:08.894510 systemd-logind[1449]: New session 14 of user core. Aug 5 22:46:08.903896 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:46:09.186440 sshd[5100]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:09.191905 systemd[1]: sshd@13-10.128.0.28:22-139.178.68.195:35998.service: Deactivated successfully. Aug 5 22:46:09.195304 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:46:09.198281 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:46:09.200717 systemd-logind[1449]: Removed session 14. Aug 5 22:46:14.247292 systemd[1]: Started sshd@14-10.128.0.28:22-139.178.68.195:54300.service - OpenSSH per-connection server daemon (139.178.68.195:54300). Aug 5 22:46:14.550964 sshd[5124]: Accepted publickey for core from 139.178.68.195 port 54300 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:14.553762 sshd[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:14.563075 systemd-logind[1449]: New session 15 of user core. Aug 5 22:46:14.571818 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:46:14.846794 sshd[5124]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:14.853011 systemd[1]: sshd@14-10.128.0.28:22-139.178.68.195:54300.service: Deactivated successfully. Aug 5 22:46:14.855989 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:46:14.857275 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:46:14.859328 systemd-logind[1449]: Removed session 15. 
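Sessions 10 through 15 above (and the later ones below) all follow the same accept/open/close pattern. A hypothetical helper, sketched in Go under the assumption of one journal entry per line on stdin, that pairs the pam_unix "session opened"/"session closed" messages by their sshd[PID] tag and reports how long each session stayed open, using the timestamp layout seen in these entries:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

func main() {
	// "Aug 5 22:46:07.544382 sshd[5076]: pam_unix(sshd:session): session opened ..."
	re := regexp.MustCompile(`(\w{3} +\d+ \d+:\d+:\d+\.\d+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)
	opened := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			t, err := time.Parse("Jan 2 15:04:05.000000", m[1])
			if err != nil {
				continue
			}
			pid := m[2]
			if m[3] == "opened" {
				opened[pid] = t
			} else if start, ok := opened[pid]; ok {
				fmt.Printf("sshd[%s]: session open for %s\n", pid, t.Sub(start))
				delete(opened, pid)
			}
		}
	}
}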
Aug 5 22:46:19.901982 systemd[1]: Started sshd@15-10.128.0.28:22-139.178.68.195:54314.service - OpenSSH per-connection server daemon (139.178.68.195:54314). Aug 5 22:46:20.204097 sshd[5145]: Accepted publickey for core from 139.178.68.195 port 54314 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:20.205979 sshd[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:20.212581 systemd-logind[1449]: New session 16 of user core. Aug 5 22:46:20.218806 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:46:20.558177 sshd[5145]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:20.566547 systemd[1]: sshd@15-10.128.0.28:22-139.178.68.195:54314.service: Deactivated successfully. Aug 5 22:46:20.569651 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:46:20.572529 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:46:20.574087 systemd-logind[1449]: Removed session 16. Aug 5 22:46:25.613951 systemd[1]: Started sshd@16-10.128.0.28:22-139.178.68.195:40824.service - OpenSSH per-connection server daemon (139.178.68.195:40824). Aug 5 22:46:25.913740 sshd[5165]: Accepted publickey for core from 139.178.68.195 port 40824 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:25.915911 sshd[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:25.923910 systemd-logind[1449]: New session 17 of user core. Aug 5 22:46:25.931795 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:46:26.223772 sshd[5165]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:26.230676 systemd[1]: sshd@16-10.128.0.28:22-139.178.68.195:40824.service: Deactivated successfully. Aug 5 22:46:26.234137 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:46:26.235340 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:46:26.237176 systemd-logind[1449]: Removed session 17. Aug 5 22:46:28.336029 systemd[1]: run-containerd-runc-k8s.io-60cfc6b278f2b4a8eaba30fb61689dc01d855833527fc83db2a0677d962c17a9-runc.tFovqc.mount: Deactivated successfully. Aug 5 22:46:31.282439 systemd[1]: Started sshd@17-10.128.0.28:22-139.178.68.195:51108.service - OpenSSH per-connection server daemon (139.178.68.195:51108). Aug 5 22:46:31.586513 sshd[5220]: Accepted publickey for core from 139.178.68.195 port 51108 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:31.588614 sshd[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:31.595580 systemd-logind[1449]: New session 18 of user core. Aug 5 22:46:31.601821 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:46:31.879242 sshd[5220]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:31.884248 systemd[1]: sshd@17-10.128.0.28:22-139.178.68.195:51108.service: Deactivated successfully. Aug 5 22:46:31.887213 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:46:31.890380 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:46:31.892314 systemd-logind[1449]: Removed session 18. Aug 5 22:46:31.935614 systemd[1]: Started sshd@18-10.128.0.28:22-139.178.68.195:51114.service - OpenSSH per-connection server daemon (139.178.68.195:51114). 
Aug 5 22:46:32.234260 sshd[5233]: Accepted publickey for core from 139.178.68.195 port 51114 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:32.236376 sshd[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:32.242627 systemd-logind[1449]: New session 19 of user core. Aug 5 22:46:32.250879 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:46:32.616641 sshd[5233]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:32.622892 systemd[1]: sshd@18-10.128.0.28:22-139.178.68.195:51114.service: Deactivated successfully. Aug 5 22:46:32.625869 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:46:32.627400 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:46:32.628973 systemd-logind[1449]: Removed session 19. Aug 5 22:46:32.672954 systemd[1]: Started sshd@19-10.128.0.28:22-139.178.68.195:51120.service - OpenSSH per-connection server daemon (139.178.68.195:51120). Aug 5 22:46:32.960189 sshd[5244]: Accepted publickey for core from 139.178.68.195 port 51120 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:32.962569 sshd[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:32.969627 systemd-logind[1449]: New session 20 of user core. Aug 5 22:46:32.974774 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:46:35.262908 sshd[5244]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:35.273695 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:46:35.274427 systemd[1]: sshd@19-10.128.0.28:22-139.178.68.195:51120.service: Deactivated successfully. Aug 5 22:46:35.280384 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:46:35.287506 systemd-logind[1449]: Removed session 20. Aug 5 22:46:35.325940 systemd[1]: Started sshd@20-10.128.0.28:22-139.178.68.195:51128.service - OpenSSH per-connection server daemon (139.178.68.195:51128). Aug 5 22:46:35.639373 sshd[5264]: Accepted publickey for core from 139.178.68.195 port 51128 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:35.642561 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:35.658306 systemd-logind[1449]: New session 21 of user core. Aug 5 22:46:35.661765 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 22:46:35.722374 kubelet[2622]: I0805 22:46:35.721928 2622 topology_manager.go:215] "Topology Admit Handler" podUID="40b43cf2-9497-4593-841e-c8954b4db35f" podNamespace="calico-apiserver" podName="calico-apiserver-7f957b979f-2stkb" Aug 5 22:46:35.738272 systemd[1]: Created slice kubepods-besteffort-pod40b43cf2_9497_4593_841e_c8954b4db35f.slice - libcontainer container kubepods-besteffort-pod40b43cf2_9497_4593_841e_c8954b4db35f.slice. 
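The slice name in the last entry above is derived from the pod UID reported by the Topology Admit Handler: dashes in the UID become underscores and the result is wrapped in the kubepods-besteffort-pod<uid>.slice prefix and suffix. A small sketch of that mapping exactly as it appears in these two entries; the "besteffort" QoS segment is taken from the logged name rather than computed:

package main

import (
	"fmt"
	"strings"
)

// sliceNameForBestEffortPod reproduces the name seen in the systemd entry above:
// pod UID 40b43cf2-9497-4593-841e-c8954b4db35f ->
// kubepods-besteffort-pod40b43cf2_9497_4593_841e_c8954b4db35f.slice
func sliceNameForBestEffortPod(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(sliceNameForBestEffortPod("40b43cf2-9497-4593-841e-c8954b4db35f"))
}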
Aug 5 22:46:35.770939 kubelet[2622]: I0805 22:46:35.770882 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40b43cf2-9497-4593-841e-c8954b4db35f-calico-apiserver-certs\") pod \"calico-apiserver-7f957b979f-2stkb\" (UID: \"40b43cf2-9497-4593-841e-c8954b4db35f\") " pod="calico-apiserver/calico-apiserver-7f957b979f-2stkb" Aug 5 22:46:35.771169 kubelet[2622]: I0805 22:46:35.770961 2622 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgbs2\" (UniqueName: \"kubernetes.io/projected/40b43cf2-9497-4593-841e-c8954b4db35f-kube-api-access-lgbs2\") pod \"calico-apiserver-7f957b979f-2stkb\" (UID: \"40b43cf2-9497-4593-841e-c8954b4db35f\") " pod="calico-apiserver/calico-apiserver-7f957b979f-2stkb" Aug 5 22:46:35.871800 kubelet[2622]: E0805 22:46:35.871740 2622 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:46:35.872009 kubelet[2622]: E0805 22:46:35.871848 2622 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40b43cf2-9497-4593-841e-c8954b4db35f-calico-apiserver-certs podName:40b43cf2-9497-4593-841e-c8954b4db35f nodeName:}" failed. No retries permitted until 2024-08-05 22:46:36.371823063 +0000 UTC m=+100.989825295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/40b43cf2-9497-4593-841e-c8954b4db35f-calico-apiserver-certs") pod "calico-apiserver-7f957b979f-2stkb" (UID: "40b43cf2-9497-4593-841e-c8954b4db35f") : secret "calico-apiserver-certs" not found Aug 5 22:46:36.256830 sshd[5264]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:36.264393 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Aug 5 22:46:36.266015 systemd[1]: sshd@20-10.128.0.28:22-139.178.68.195:51128.service: Deactivated successfully. Aug 5 22:46:36.271767 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:46:36.277546 systemd-logind[1449]: Removed session 21. Aug 5 22:46:36.316083 systemd[1]: Started sshd@21-10.128.0.28:22-139.178.68.195:51142.service - OpenSSH per-connection server daemon (139.178.68.195:51142). Aug 5 22:46:36.625269 sshd[5281]: Accepted publickey for core from 139.178.68.195 port 51142 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:36.627456 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:36.634678 systemd-logind[1449]: New session 22 of user core. Aug 5 22:46:36.640830 systemd[1]: Started session-22.scope - Session 22 of User core. 
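The MountVolume.SetUp failure above is not retried immediately: the kubelet schedules the next attempt 500ms later (durationBeforeRetry 500ms) and backs off further on repeated failures. A generic doubling-backoff sketch in Go starting from that 500ms; the doubling factor and the 2-minute cap are illustrative assumptions, not kubelet's exact constants:

package main

import (
	"fmt"
	"time"
)

// nextRetryDelay doubles the wait after each consecutive failure, starting from
// the 500ms durationBeforeRetry in the kubelet entry above. The doubling factor
// and the 2-minute cap are illustrative assumptions, not kubelet's constants.
func nextRetryDelay(consecutiveFailures int) time.Duration {
	d := 500 * time.Millisecond
	for i := 1; i < consecutiveFailures; i++ {
		d *= 2
		if d > 2*time.Minute {
			return 2 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> wait %s before retrying MountVolume.SetUp\n", n, nextRetryDelay(n))
	}
}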
Aug 5 22:46:36.647124 containerd[1460]: time="2024-08-05T22:46:36.646388710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f957b979f-2stkb,Uid:40b43cf2-9497-4593-841e-c8954b4db35f,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:46:36.879671 systemd-networkd[1372]: cali35851ad9208: Link UP Aug 5 22:46:36.882845 systemd-networkd[1372]: cali35851ad9208: Gained carrier Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.732 [INFO][5287] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0 calico-apiserver-7f957b979f- calico-apiserver 40b43cf2-9497-4593-841e-c8954b4db35f 1139 0 2024-08-05 22:46:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f957b979f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal calico-apiserver-7f957b979f-2stkb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali35851ad9208 [] []}} ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.732 [INFO][5287] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.779 [INFO][5298] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" HandleID="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.799 [INFO][5298] ipam_plugin.go 264: Auto assigning IP ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" HandleID="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", "pod":"calico-apiserver-7f957b979f-2stkb", "timestamp":"2024-08-05 22:46:36.779863133 +0000 UTC"}, Hostname:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.799 [INFO][5298] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.799 [INFO][5298] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.799 [INFO][5298] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal' Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.805 [INFO][5298] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.812 [INFO][5298] ipam.go 372: Looking up existing affinities for host host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.830 [INFO][5298] ipam.go 489: Trying affinity for 192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.834 [INFO][5298] ipam.go 155: Attempting to load block cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.840 [INFO][5298] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.127.64/26 host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.840 [INFO][5298] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.127.64/26 handle="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.843 [INFO][5298] ipam.go 1685: Creating new handle: k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8 Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.850 [INFO][5298] ipam.go 1203: Writing block in order to claim IPs block=192.168.127.64/26 handle="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.861 [INFO][5298] ipam.go 1216: Successfully claimed IPs: [192.168.127.69/26] block=192.168.127.64/26 handle="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.862 [INFO][5298] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.127.69/26] handle="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" host="ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal" Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.862 [INFO][5298] ipam_plugin.go 373: Released host-wide IPAM lock. 
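The IPAM steps above load the node's affine block 192.168.127.64/26 and claim 192.168.127.69 from it. A toy stand-in for that assignment step, not Calico's IPAM code: walk the block and take the first address not already allocated. The set of used addresses below is partly assumed (only .66 and .67 are visible earlier in this log, on the two coredns pods) so that the example lands on the .69 reported here:

package main

import (
	"fmt"
	"net/netip"
)

// firstFreeInBlock walks the affine block and returns the first address that is
// not already allocated. A toy illustration of the assignment step above, not
// Calico's IPAM implementation.
func firstFreeInBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.127.64/26") // block loaded in the entries above
	used := map[netip.Addr]bool{}
	// .66 and .67 appear earlier in this log (the two coredns pods); .64, .65
	// and .68 are assumed taken so the example reproduces the .69 result.
	for _, s := range []string{"192.168.127.64", "192.168.127.65", "192.168.127.66", "192.168.127.67", "192.168.127.68"} {
		used[netip.MustParseAddr(s)] = true
	}
	if a, ok := firstFreeInBlock(block, used); ok {
		fmt.Println("assign", a) // prints: assign 192.168.127.69
	}
}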
Aug 5 22:46:36.916711 containerd[1460]: 2024-08-05 22:46:36.862 [INFO][5298] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.127.69/26] IPv6=[] ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" HandleID="k8s-pod-network.c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Workload="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" Aug 5 22:46:36.919288 containerd[1460]: 2024-08-05 22:46:36.868 [INFO][5287] k8s.go 386: Populated endpoint ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0", GenerateName:"calico-apiserver-7f957b979f-", Namespace:"calico-apiserver", SelfLink:"", UID:"40b43cf2-9497-4593-841e-c8954b4db35f", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 46, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f957b979f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7f957b979f-2stkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35851ad9208", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:46:36.919288 containerd[1460]: 2024-08-05 22:46:36.868 [INFO][5287] k8s.go 387: Calico CNI using IPs: [192.168.127.69/32] ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" Aug 5 22:46:36.919288 containerd[1460]: 2024-08-05 22:46:36.869 [INFO][5287] dataplane_linux.go 68: Setting the host side veth name to cali35851ad9208 ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" Aug 5 22:46:36.919288 containerd[1460]: 2024-08-05 22:46:36.881 [INFO][5287] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" 
WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" Aug 5 22:46:36.919288 containerd[1460]: 2024-08-05 22:46:36.884 [INFO][5287] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0", GenerateName:"calico-apiserver-7f957b979f-", Namespace:"calico-apiserver", SelfLink:"", UID:"40b43cf2-9497-4593-841e-c8954b4db35f", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 46, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f957b979f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012-1-0-5ef6fac5585b3d71d173.c.flatcar-212911.internal", ContainerID:"c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8", Pod:"calico-apiserver-7f957b979f-2stkb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali35851ad9208", MAC:"b2:a0:2c:68:84:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:46:36.919288 containerd[1460]: 2024-08-05 22:46:36.905 [INFO][5287] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8" Namespace="calico-apiserver" Pod="calico-apiserver-7f957b979f-2stkb" WorkloadEndpoint="ci--4012--1--0--5ef6fac5585b3d71d173.c.flatcar--212911.internal-k8s-calico--apiserver--7f957b979f--2stkb-eth0" Aug 5 22:46:37.006067 containerd[1460]: time="2024-08-05T22:46:37.005889871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:46:37.006067 containerd[1460]: time="2024-08-05T22:46:37.006001250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:46:37.007603 containerd[1460]: time="2024-08-05T22:46:37.006036596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:46:37.007603 containerd[1460]: time="2024-08-05T22:46:37.006059856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:46:37.020314 sshd[5281]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:37.033218 systemd[1]: sshd@21-10.128.0.28:22-139.178.68.195:51142.service: Deactivated successfully. Aug 5 22:46:37.039845 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:46:37.044014 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:46:37.050959 systemd-logind[1449]: Removed session 22. Aug 5 22:46:37.078805 systemd[1]: Started cri-containerd-c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8.scope - libcontainer container c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8. Aug 5 22:46:37.155547 containerd[1460]: time="2024-08-05T22:46:37.154433122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f957b979f-2stkb,Uid:40b43cf2-9497-4593-841e-c8954b4db35f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8\"" Aug 5 22:46:37.159031 containerd[1460]: time="2024-08-05T22:46:37.157187937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:46:38.904918 systemd-networkd[1372]: cali35851ad9208: Gained IPv6LL Aug 5 22:46:39.279884 containerd[1460]: time="2024-08-05T22:46:39.279710928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:46:39.284646 containerd[1460]: time="2024-08-05T22:46:39.284542224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Aug 5 22:46:39.285992 containerd[1460]: time="2024-08-05T22:46:39.285913642Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:46:39.291947 containerd[1460]: time="2024-08-05T22:46:39.291776552Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:46:39.293521 containerd[1460]: time="2024-08-05T22:46:39.292949994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.134361242s" Aug 5 22:46:39.293521 containerd[1460]: time="2024-08-05T22:46:39.293005992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 5 22:46:39.296645 containerd[1460]: time="2024-08-05T22:46:39.296454584Z" level=info msg="CreateContainer within sandbox \"c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 22:46:39.316845 containerd[1460]: time="2024-08-05T22:46:39.316779456Z" level=info msg="CreateContainer within sandbox \"c6b91ff75ffa0015b2cbf63bcb9fdaeeb1e1fcce09ed3cea5fdfc186095bf6d8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"199d28e7a111da81c48a540e00b36fbc684476c94ddd9e131d4c05371498966d\"" Aug 5 
22:46:39.317832 containerd[1460]: time="2024-08-05T22:46:39.317786832Z" level=info msg="StartContainer for \"199d28e7a111da81c48a540e00b36fbc684476c94ddd9e131d4c05371498966d\"" Aug 5 22:46:39.374817 systemd[1]: Started cri-containerd-199d28e7a111da81c48a540e00b36fbc684476c94ddd9e131d4c05371498966d.scope - libcontainer container 199d28e7a111da81c48a540e00b36fbc684476c94ddd9e131d4c05371498966d. Aug 5 22:46:39.443790 containerd[1460]: time="2024-08-05T22:46:39.442508695Z" level=info msg="StartContainer for \"199d28e7a111da81c48a540e00b36fbc684476c94ddd9e131d4c05371498966d\" returns successfully" Aug 5 22:46:40.692717 kubelet[2622]: I0805 22:46:40.691422 2622 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f957b979f-2stkb" podStartSLOduration=3.553268348 podStartE2EDuration="5.691396186s" podCreationTimestamp="2024-08-05 22:46:35 +0000 UTC" firstStartedPulling="2024-08-05 22:46:37.156165994 +0000 UTC m=+101.774168228" lastFinishedPulling="2024-08-05 22:46:39.294293832 +0000 UTC m=+103.912296066" observedRunningTime="2024-08-05 22:46:40.131633641 +0000 UTC m=+104.749635887" watchObservedRunningTime="2024-08-05 22:46:40.691396186 +0000 UTC m=+105.309398431" Aug 5 22:46:41.645808 ntpd[1428]: Listen normally on 13 cali35851ad9208 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:46:41.646319 ntpd[1428]: 5 Aug 22:46:41 ntpd[1428]: Listen normally on 13 cali35851ad9208 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:46:42.075198 systemd[1]: Started sshd@22-10.128.0.28:22-139.178.68.195:49730.service - OpenSSH per-connection server daemon (139.178.68.195:49730). Aug 5 22:46:42.373755 sshd[5421]: Accepted publickey for core from 139.178.68.195 port 49730 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:42.375910 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:42.382665 systemd-logind[1449]: New session 23 of user core. Aug 5 22:46:42.387749 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 22:46:42.661665 sshd[5421]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:42.667664 systemd[1]: sshd@22-10.128.0.28:22-139.178.68.195:49730.service: Deactivated successfully. Aug 5 22:46:42.670903 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 22:46:42.672409 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Aug 5 22:46:42.674044 systemd-logind[1449]: Removed session 23. Aug 5 22:46:47.723776 systemd[1]: Started sshd@23-10.128.0.28:22-139.178.68.195:49732.service - OpenSSH per-connection server daemon (139.178.68.195:49732). Aug 5 22:46:48.039627 sshd[5469]: Accepted publickey for core from 139.178.68.195 port 49732 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:48.042050 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:48.056797 systemd-logind[1449]: New session 24 of user core. Aug 5 22:46:48.061733 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 22:46:48.374948 sshd[5469]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:48.382794 systemd[1]: sshd@23-10.128.0.28:22-139.178.68.195:49732.service: Deactivated successfully. Aug 5 22:46:48.389483 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 22:46:48.391717 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Aug 5 22:46:48.394807 systemd-logind[1449]: Removed session 24. 
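The pod_startup_latency_tracker entry above reports podStartE2EDuration="5.691396186s" and podStartSLOduration=3.553268348; from the logged timestamps these work out as watchObservedRunningTime minus podCreationTimestamp, and that E2E figure minus the image-pulling window (lastFinishedPulling minus firstStartedPulling). A short Go check of that arithmetic using the values quoted in the entry; the relationship is inferred from these numbers, not quoted from kubelet documentation:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout for timestamps as they appear in the kubelet entry above.
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the pod_startup_latency_tracker entry above.
	created := mustParse("2024-08-05 22:46:35 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2024-08-05 22:46:37.156165994 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2024-08-05 22:46:39.294293832 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2024-08-05 22:46:40.691396186 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)       // 5.691396186s, the logged podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // 2.138127838s spent pulling the apiserver image
	fmt.Println("e2e:", e2e)
	fmt.Println("pulling:", pulling)
	fmt.Println("e2e minus pulling:", e2e-pulling) // 3.553268348s, the logged podStartSLOduration
}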
Aug 5 22:46:53.435906 systemd[1]: Started sshd@24-10.128.0.28:22-139.178.68.195:53358.service - OpenSSH per-connection server daemon (139.178.68.195:53358). Aug 5 22:46:53.758324 sshd[5482]: Accepted publickey for core from 139.178.68.195 port 53358 ssh2: RSA SHA256:ZpqyhMyoOF657ZX4PnVZ8/cu5Rr+dM587JPiG744NK0 Aug 5 22:46:53.760208 sshd[5482]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:46:53.769362 systemd-logind[1449]: New session 25 of user core. Aug 5 22:46:53.776749 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 22:46:54.049764 sshd[5482]: pam_unix(sshd:session): session closed for user core Aug 5 22:46:54.055739 systemd[1]: sshd@24-10.128.0.28:22-139.178.68.195:53358.service: Deactivated successfully. Aug 5 22:46:54.058338 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 22:46:54.059545 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Aug 5 22:46:54.060942 systemd-logind[1449]: Removed session 25.