Oct 9 01:04:49.891919 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Oct 8 23:33:43 -00 2024
Oct 9 01:04:49.891939 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:04:49.891947 kernel: BIOS-provided physical RAM map:
Oct 9 01:04:49.891953 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 9 01:04:49.891958 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 9 01:04:49.891963 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 9 01:04:49.891969 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Oct 9 01:04:49.892914 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Oct 9 01:04:49.892927 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 9 01:04:49.892932 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 9 01:04:49.892938 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 9 01:04:49.892943 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 9 01:04:49.892948 kernel: NX (Execute Disable) protection: active
Oct 9 01:04:49.892953 kernel: APIC: Static calls initialized
Oct 9 01:04:49.892962 kernel: SMBIOS 2.8 present.
Oct 9 01:04:49.892968 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Oct 9 01:04:49.892986 kernel: Hypervisor detected: KVM
Oct 9 01:04:49.892991 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 9 01:04:49.893008 kernel: kvm-clock: using sched offset of 2913113536 cycles
Oct 9 01:04:49.893014 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 9 01:04:49.893020 kernel: tsc: Detected 2445.404 MHz processor
Oct 9 01:04:49.893025 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 9 01:04:49.893031 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 9 01:04:49.893040 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Oct 9 01:04:49.893046 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 9 01:04:49.893051 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 9 01:04:49.893057 kernel: Using GB pages for direct mapping
Oct 9 01:04:49.893062 kernel: ACPI: Early table checksum verification disabled
Oct 9 01:04:49.893068 kernel: ACPI: RSDP 0x00000000000F51F0 000014 (v00 BOCHS )
Oct 9 01:04:49.893073 kernel: ACPI: RSDT 0x000000007CFE265D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:04:49.893079 kernel: ACPI: FACP 0x000000007CFE244D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:04:49.893084 kernel: ACPI: DSDT 0x000000007CFE0040 00240D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:04:49.893092 kernel: ACPI: FACS 0x000000007CFE0000 000040
Oct 9 01:04:49.893098 kernel: ACPI: APIC 0x000000007CFE2541 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:04:49.893103 kernel: ACPI: HPET 0x000000007CFE25C1 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:04:49.893109 kernel: ACPI: MCFG 0x000000007CFE25F9 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:04:49.893115 kernel: ACPI: WAET 0x000000007CFE2635 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 9 01:04:49.893120 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe244d-0x7cfe2540]
Oct 9 01:04:49.893126 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe244c]
Oct 9 01:04:49.893131 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Oct 9 01:04:49.893142 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2541-0x7cfe25c0]
Oct 9 01:04:49.893148 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25c1-0x7cfe25f8]
Oct 9 01:04:49.893154 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe25f9-0x7cfe2634]
Oct 9 01:04:49.893160 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe2635-0x7cfe265c]
Oct 9 01:04:49.893165 kernel: No NUMA configuration found
Oct 9 01:04:49.893171 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Oct 9 01:04:49.893179 kernel: NODE_DATA(0) allocated [mem 0x7cfd6000-0x7cfdbfff]
Oct 9 01:04:49.893185 kernel: Zone ranges:
Oct 9 01:04:49.893191 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 9 01:04:49.893197 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Oct 9 01:04:49.893202 kernel: Normal empty
Oct 9 01:04:49.893208 kernel: Movable zone start for each node
Oct 9 01:04:49.893214 kernel: Early memory node ranges
Oct 9 01:04:49.893220 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 9 01:04:49.893226 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Oct 9 01:04:49.893231 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Oct 9 01:04:49.893239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 9 01:04:49.893245 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 9 01:04:49.893251 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 9 01:04:49.893256 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 9 01:04:49.893262 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 9 01:04:49.893268 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 9 01:04:49.893274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 9 01:04:49.893279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 9 01:04:49.893285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 9 01:04:49.893293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 9 01:04:49.893299 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 9 01:04:49.893304 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 9 01:04:49.893310 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 9 01:04:49.893316 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 9 01:04:49.893322 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 9 01:04:49.893327 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 9 01:04:49.893333 kernel: Booting paravirtualized kernel on KVM
Oct 9 01:04:49.893339 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 9 01:04:49.893347 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Oct 9 01:04:49.893353 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Oct 9 01:04:49.893359 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Oct 9 01:04:49.893364 kernel: pcpu-alloc: [0] 0 1
Oct 9 01:04:49.893370 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct 9 01:04:49.893377 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:04:49.893383 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 9 01:04:49.893389 kernel: random: crng init done
Oct 9 01:04:49.893396 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 9 01:04:49.893402 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 9 01:04:49.893408 kernel: Fallback order for Node 0: 0
Oct 9 01:04:49.893414 kernel: Built 1 zonelists, mobility grouping on. Total pages: 503708
Oct 9 01:04:49.893420 kernel: Policy zone: DMA32
Oct 9 01:04:49.893425 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 9 01:04:49.893431 kernel: Memory: 1922056K/2047464K available (12288K kernel code, 2305K rwdata, 22728K rodata, 42872K init, 2316K bss, 125148K reserved, 0K cma-reserved)
Oct 9 01:04:49.893437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 9 01:04:49.893443 kernel: ftrace: allocating 37786 entries in 148 pages
Oct 9 01:04:49.893451 kernel: ftrace: allocated 148 pages with 3 groups
Oct 9 01:04:49.893457 kernel: Dynamic Preempt: voluntary
Oct 9 01:04:49.893462 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 9 01:04:49.893469 kernel: rcu: RCU event tracing is enabled.
Oct 9 01:04:49.893475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 9 01:04:49.893481 kernel: Trampoline variant of Tasks RCU enabled.
Oct 9 01:04:49.893487 kernel: Rude variant of Tasks RCU enabled.
Oct 9 01:04:49.893493 kernel: Tracing variant of Tasks RCU enabled.
Oct 9 01:04:49.893499 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 9 01:04:49.893504 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 9 01:04:49.893512 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 9 01:04:49.893518 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 9 01:04:49.893524 kernel: Console: colour VGA+ 80x25
Oct 9 01:04:49.893530 kernel: printk: console [tty0] enabled
Oct 9 01:04:49.893535 kernel: printk: console [ttyS0] enabled
Oct 9 01:04:49.893541 kernel: ACPI: Core revision 20230628
Oct 9 01:04:49.893547 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 9 01:04:49.893553 kernel: APIC: Switch to symmetric I/O mode setup
Oct 9 01:04:49.893559 kernel: x2apic enabled
Oct 9 01:04:49.893566 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 9 01:04:49.893573 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 9 01:04:49.893579 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 9 01:04:49.893585 kernel: Calibrating delay loop (skipped) preset value.. 4890.80 BogoMIPS (lpj=2445404)
Oct 9 01:04:49.893590 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 9 01:04:49.893596 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 9 01:04:49.893602 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 9 01:04:49.893608 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 9 01:04:49.893622 kernel: Spectre V2 : Mitigation: Retpolines
Oct 9 01:04:49.893628 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 9 01:04:49.893634 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 9 01:04:49.893642 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 9 01:04:49.893648 kernel: RETBleed: Mitigation: untrained return thunk
Oct 9 01:04:49.893655 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 9 01:04:49.893661 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 9 01:04:49.893667 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 9 01:04:49.893673 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 9 01:04:49.893679 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 9 01:04:49.893686 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 9 01:04:49.893694 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 9 01:04:49.893700 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 9 01:04:49.893706 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 9 01:04:49.893712 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 9 01:04:49.893718 kernel: Freeing SMP alternatives memory: 32K
Oct 9 01:04:49.893726 kernel: pid_max: default: 32768 minimum: 301
Oct 9 01:04:49.893732 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 9 01:04:49.893738 kernel: landlock: Up and running.
Oct 9 01:04:49.893744 kernel: SELinux: Initializing.
Oct 9 01:04:49.893750 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 01:04:49.893756 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 9 01:04:49.893762 kernel: smpboot: CPU0: AMD EPYC Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 9 01:04:49.893768 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:04:49.893775 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:04:49.893783 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 9 01:04:49.893789 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 9 01:04:49.893795 kernel: ... version: 0
Oct 9 01:04:49.893801 kernel: ... bit width: 48
Oct 9 01:04:49.893807 kernel: ... generic registers: 6
Oct 9 01:04:49.893813 kernel: ... value mask: 0000ffffffffffff
Oct 9 01:04:49.893819 kernel: ... max period: 00007fffffffffff
Oct 9 01:04:49.893825 kernel: ... fixed-purpose events: 0
Oct 9 01:04:49.893831 kernel: ... event mask: 000000000000003f
Oct 9 01:04:49.893837 kernel: signal: max sigframe size: 1776
Oct 9 01:04:49.893845 kernel: rcu: Hierarchical SRCU implementation.
Oct 9 01:04:49.893851 kernel: rcu: Max phase no-delay instances is 400.
Oct 9 01:04:49.893857 kernel: smp: Bringing up secondary CPUs ...
Oct 9 01:04:49.893863 kernel: smpboot: x86: Booting SMP configuration:
Oct 9 01:04:49.893869 kernel: .... node #0, CPUs: #1
Oct 9 01:04:49.893875 kernel: smp: Brought up 1 node, 2 CPUs
Oct 9 01:04:49.893881 kernel: smpboot: Max logical packages: 1
Oct 9 01:04:49.893887 kernel: smpboot: Total of 2 processors activated (9781.61 BogoMIPS)
Oct 9 01:04:49.893893 kernel: devtmpfs: initialized
Oct 9 01:04:49.893901 kernel: x86/mm: Memory block size: 128MB
Oct 9 01:04:49.893908 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 9 01:04:49.893914 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 9 01:04:49.893920 kernel: pinctrl core: initialized pinctrl subsystem
Oct 9 01:04:49.893926 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 9 01:04:49.893932 kernel: audit: initializing netlink subsys (disabled)
Oct 9 01:04:49.893938 kernel: audit: type=2000 audit(1728435888.746:1): state=initialized audit_enabled=0 res=1
Oct 9 01:04:49.893944 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 9 01:04:49.893950 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 9 01:04:49.893958 kernel: cpuidle: using governor menu
Oct 9 01:04:49.893964 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 9 01:04:49.893970 kernel: dca service started, version 1.12.1
Oct 9 01:04:49.893998 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 9 01:04:49.894014 kernel: PCI: Using configuration type 1 for base access
Oct 9 01:04:49.894020 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 9 01:04:49.894027 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 9 01:04:49.894033 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 9 01:04:49.894047 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 9 01:04:49.894069 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 9 01:04:49.894086 kernel: ACPI: Added _OSI(Module Device)
Oct 9 01:04:49.894103 kernel: ACPI: Added _OSI(Processor Device)
Oct 9 01:04:49.894123 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 9 01:04:49.894139 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 9 01:04:49.894153 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 9 01:04:49.894159 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 9 01:04:49.894187 kernel: ACPI: Interpreter enabled
Oct 9 01:04:49.894194 kernel: ACPI: PM: (supports S0 S5)
Oct 9 01:04:49.894203 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 9 01:04:49.894209 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 9 01:04:49.894215 kernel: PCI: Using E820 reservations for host bridge windows
Oct 9 01:04:49.894221 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 9 01:04:49.894227 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 9 01:04:49.894469 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 9 01:04:49.894595 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 9 01:04:49.894746 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 9 01:04:49.894756 kernel: PCI host bridge to bus 0000:00
Oct 9 01:04:49.894874 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 9 01:04:49.895066 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 9 01:04:49.895181 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 9 01:04:49.895286 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Oct 9 01:04:49.895388 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 9 01:04:49.895489 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 9 01:04:49.895596 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 9 01:04:49.895725 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 9 01:04:49.895848 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000
Oct 9 01:04:49.895960 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfb800000-0xfbffffff pref]
Oct 9 01:04:49.896103 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfd200000-0xfd203fff 64bit pref]
Oct 9 01:04:49.897188 kernel: pci 0000:00:01.0: reg 0x20: [mem 0xfea10000-0xfea10fff]
Oct 9 01:04:49.897331 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea00000-0xfea0ffff pref]
Oct 9 01:04:49.897441 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 9 01:04:49.897555 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.897659 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea11000-0xfea11fff]
Oct 9 01:04:49.897771 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.897903 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea12000-0xfea12fff]
Oct 9 01:04:49.901098 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.901256 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea13000-0xfea13fff]
Oct 9 01:04:49.901373 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.901479 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea14000-0xfea14fff]
Oct 9 01:04:49.901617 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.901728 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea15000-0xfea15fff]
Oct 9 01:04:49.902070 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.902276 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea16000-0xfea16fff]
Oct 9 01:04:49.902403 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.902514 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea17000-0xfea17fff]
Oct 9 01:04:49.902632 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.902744 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea18000-0xfea18fff]
Oct 9 01:04:49.902866 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 9 01:04:49.903118 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfea19000-0xfea19fff]
Oct 9 01:04:49.903244 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 9 01:04:49.903351 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 9 01:04:49.903462 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 9 01:04:49.903566 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc040-0xc05f]
Oct 9 01:04:49.903670 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea1a000-0xfea1afff]
Oct 9 01:04:49.903817 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 9 01:04:49.903925 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 9 01:04:49.904084 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 9 01:04:49.904194 kernel: pci 0000:01:00.0: reg 0x14: [mem 0xfe880000-0xfe880fff]
Oct 9 01:04:49.904304 kernel: pci 0000:01:00.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 9 01:04:49.904414 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfe800000-0xfe87ffff pref]
Oct 9 01:04:49.904526 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 9 01:04:49.904681 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 9 01:04:49.904825 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 01:04:49.904945 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 9 01:04:49.905609 kernel: pci 0000:02:00.0: reg 0x10: [mem 0xfe600000-0xfe603fff 64bit]
Oct 9 01:04:49.905723 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 9 01:04:49.905829 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 9 01:04:49.905970 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 01:04:49.906220 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 9 01:04:49.906333 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfe400000-0xfe400fff]
Oct 9 01:04:49.906441 kernel: pci 0000:03:00.0: reg 0x20: [mem 0xfcc00000-0xfcc03fff 64bit pref]
Oct 9 01:04:49.906544 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 9 01:04:49.906646 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 9 01:04:49.906748 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 01:04:49.906874 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 9 01:04:49.907013 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 9 01:04:49.907123 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 9 01:04:49.907240 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 9 01:04:49.907361 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 01:04:49.907479 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 9 01:04:49.907588 kernel: pci 0000:05:00.0: reg 0x20: [mem 0xfc800000-0xfc803fff 64bit pref]
Oct 9 01:04:49.907722 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 9 01:04:49.907831 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 9 01:04:49.907935 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 01:04:49.908092 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 9 01:04:49.908232 kernel: pci 0000:06:00.0: reg 0x14: [mem 0xfde00000-0xfde00fff]
Oct 9 01:04:49.908343 kernel: pci 0000:06:00.0: reg 0x20: [mem 0xfc600000-0xfc603fff 64bit pref]
Oct 9 01:04:49.908446 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 9 01:04:49.908549 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 9 01:04:49.908659 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 01:04:49.908687 kernel: acpiphp: Slot [0] registered
Oct 9 01:04:49.908810 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 9 01:04:49.908920 kernel: pci 0000:07:00.0: reg 0x14: [mem 0xfdc80000-0xfdc80fff]
Oct 9 01:04:49.909080 kernel: pci 0000:07:00.0: reg 0x20: [mem 0xfc400000-0xfc403fff 64bit pref]
Oct 9 01:04:49.909219 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfdc00000-0xfdc7ffff pref]
Oct 9 01:04:49.909325 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 9 01:04:49.909433 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 9 01:04:49.909536 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 01:04:49.909545 kernel: acpiphp: Slot [0-2] registered
Oct 9 01:04:49.909647 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 9 01:04:49.909748 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 9 01:04:49.909876 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 01:04:49.909887 kernel: acpiphp: Slot [0-3] registered
Oct 9 01:04:49.911041 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 9 01:04:49.911160 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 9 01:04:49.911271 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 01:04:49.911280 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 9 01:04:49.911287 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 9 01:04:49.911293 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 9 01:04:49.911299 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 9 01:04:49.911306 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 9 01:04:49.911312 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 9 01:04:49.911318 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 9 01:04:49.911327 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 9 01:04:49.911333 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 9 01:04:49.911339 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 9 01:04:49.911345 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 9 01:04:49.911352 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 9 01:04:49.911358 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 9 01:04:49.911364 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 9 01:04:49.911370 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 9 01:04:49.911377 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 9 01:04:49.911385 kernel: iommu: Default domain type: Translated
Oct 9 01:04:49.911391 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 9 01:04:49.911397 kernel: PCI: Using ACPI for IRQ routing
Oct 9 01:04:49.911403 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 9 01:04:49.911409 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 9 01:04:49.911416 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Oct 9 01:04:49.911517 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 9 01:04:49.911620 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 9 01:04:49.911722 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 9 01:04:49.911734 kernel: vgaarb: loaded
Oct 9 01:04:49.911741 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 9 01:04:49.911748 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 9 01:04:49.911754 kernel: clocksource: Switched to clocksource kvm-clock
Oct 9 01:04:49.911760 kernel: VFS: Disk quotas dquot_6.6.0
Oct 9 01:04:49.911766 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 9 01:04:49.911773 kernel: pnp: PnP ACPI init
Oct 9 01:04:49.911885 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 9 01:04:49.911898 kernel: pnp: PnP ACPI: found 5 devices
Oct 9 01:04:49.911905 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 9 01:04:49.911911 kernel: NET: Registered PF_INET protocol family
Oct 9 01:04:49.911917 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 9 01:04:49.911924 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 9 01:04:49.911930 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 9 01:04:49.911936 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 9 01:04:49.911942 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 9 01:04:49.911949 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 9 01:04:49.911957 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 01:04:49.911963 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 9 01:04:49.911969 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 9 01:04:49.911989 kernel: NET: Registered PF_XDP protocol family
Oct 9 01:04:49.912098 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 9 01:04:49.912201 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 9 01:04:49.912306 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 9 01:04:49.912438 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x1000-0x1fff]
Oct 9 01:04:49.912552 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x2000-0x2fff]
Oct 9 01:04:49.912658 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x3000-0x3fff]
Oct 9 01:04:49.912762 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 9 01:04:49.912867 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 9 01:04:49.912971 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 01:04:49.918198 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 9 01:04:49.918312 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 9 01:04:49.918424 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 01:04:49.918528 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 9 01:04:49.918631 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 9 01:04:49.918734 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 01:04:49.918836 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 9 01:04:49.918937 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 9 01:04:49.919059 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 01:04:49.919169 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 9 01:04:49.919289 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 9 01:04:49.919395 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 01:04:49.919497 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 9 01:04:49.919612 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Oct 9 01:04:49.919738 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 01:04:49.919843 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 9 01:04:49.922455 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Oct 9 01:04:49.922583 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 9 01:04:49.922690 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 01:04:49.922820 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 9 01:04:49.922933 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Oct 9 01:04:49.923055 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Oct 9 01:04:49.923159 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 01:04:49.923261 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 9 01:04:49.923364 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Oct 9 01:04:49.923468 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 9 01:04:49.923576 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 01:04:49.923675 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 9 01:04:49.923771 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 9 01:04:49.923865 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 9 01:04:49.923964 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Oct 9 01:04:49.926497 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 9 01:04:49.926654 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 9 01:04:49.926823 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 9 01:04:49.926953 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Oct 9 01:04:49.927113 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 9 01:04:49.927265 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 9 01:04:49.927428 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 9 01:04:49.927588 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 9 01:04:49.927763 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 9 01:04:49.927926 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 9 01:04:49.928113 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 9 01:04:49.928248 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 9 01:04:49.928469 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 9 01:04:49.928606 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 9 01:04:49.928750 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Oct 9 01:04:49.928948 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 9 01:04:49.930200 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 9 01:04:49.930350 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Oct 9 01:04:49.930466 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct 9 01:04:49.930594 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 9 01:04:49.930761 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Oct 9 01:04:49.930911 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 9 01:04:49.931123 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 9 01:04:49.931142 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 9 01:04:49.931155 kernel: PCI: CLS 0 bytes, default 64
Oct 9 01:04:49.931171 kernel: Initialise system trusted keyrings
Oct 9 01:04:49.931182 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Oct 9 01:04:49.931194 kernel: Key type asymmetric registered
Oct 9 01:04:49.931205 kernel: Asymmetric key parser 'x509' registered
Oct 9 01:04:49.931216 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Oct 9 01:04:49.931227 kernel: io scheduler mq-deadline registered
Oct 9 01:04:49.931238 kernel: io scheduler kyber registered
Oct 9 01:04:49.931249 kernel: io scheduler bfq registered
Oct 9 01:04:49.931402 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct 9 01:04:49.931515 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct 9 01:04:49.931629 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct 9 01:04:49.931735 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct 9 01:04:49.931842 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct 9 01:04:49.931946 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct 9 01:04:49.932072 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct 9 01:04:49.932181 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct 9 01:04:49.932290 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct 9 01:04:49.932396 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct 9 01:04:49.932509 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct 9 01:04:49.932614 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct 9 01:04:49.932720 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct 9 01:04:49.932824 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct 9 01:04:49.932932 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct 9 01:04:49.933056 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct 9 01:04:49.933067 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 9 01:04:49.933173 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct 9 01:04:49.933283 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct 9 01:04:49.933293 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 9 01:04:49.933300 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct 9 01:04:49.933307 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 9 01:04:49.933314 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 9 01:04:49.933321 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 9 01:04:49.933328 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 9 01:04:49.933335 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 9 01:04:49.933342 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 9 01:04:49.933458 kernel: 
rtc_cmos 00:03: RTC can wake from S4 Oct 9 01:04:49.933559 kernel: rtc_cmos 00:03: registered as rtc0 Oct 9 01:04:49.933656 kernel: rtc_cmos 00:03: setting system clock to 2024-10-09T01:04:49 UTC (1728435889) Oct 9 01:04:49.933754 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Oct 9 01:04:49.933763 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Oct 9 01:04:49.933771 kernel: NET: Registered PF_INET6 protocol family Oct 9 01:04:49.933777 kernel: Segment Routing with IPv6 Oct 9 01:04:49.933784 kernel: In-situ OAM (IOAM) with IPv6 Oct 9 01:04:49.933797 kernel: NET: Registered PF_PACKET protocol family Oct 9 01:04:49.933804 kernel: Key type dns_resolver registered Oct 9 01:04:49.933811 kernel: IPI shorthand broadcast: enabled Oct 9 01:04:49.933817 kernel: sched_clock: Marking stable (1120009612, 130891361)->(1262956107, -12055134) Oct 9 01:04:49.933824 kernel: registered taskstats version 1 Oct 9 01:04:49.933831 kernel: Loading compiled-in X.509 certificates Oct 9 01:04:49.933840 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: 03ae66f5ce294ce3ab718ee0d7c4a4a6e8c5aae6' Oct 9 01:04:49.933847 kernel: Key type .fscrypt registered Oct 9 01:04:49.933853 kernel: Key type fscrypt-provisioning registered Oct 9 01:04:49.933863 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 9 01:04:49.933870 kernel: ima: Allocated hash algorithm: sha1
Oct 9 01:04:49.933877 kernel: ima: No architecture policies found
Oct 9 01:04:49.933883 kernel: clk: Disabling unused clocks
Oct 9 01:04:49.933890 kernel: Freeing unused kernel image (initmem) memory: 42872K
Oct 9 01:04:49.933896 kernel: Write protecting the kernel read-only data: 36864k
Oct 9 01:04:49.933903 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Oct 9 01:04:49.933910 kernel: Run /init as init process
Oct 9 01:04:49.933919 kernel: with arguments:
Oct 9 01:04:49.933926 kernel: /init
Oct 9 01:04:49.933933 kernel: with environment:
Oct 9 01:04:49.933939 kernel: HOME=/
Oct 9 01:04:49.933946 kernel: TERM=linux
Oct 9 01:04:49.933952 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 9 01:04:49.933961 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:04:49.933971 systemd[1]: Detected virtualization kvm.
Oct 9 01:04:49.934010 systemd[1]: Detected architecture x86-64.
Oct 9 01:04:49.934017 systemd[1]: Running in initrd.
Oct 9 01:04:49.934024 systemd[1]: No hostname configured, using default hostname.
Oct 9 01:04:49.934031 systemd[1]: Hostname set to .
Oct 9 01:04:49.934039 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:04:49.934046 systemd[1]: Queued start job for default target initrd.target.
Oct 9 01:04:49.934053 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:04:49.934060 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:04:49.934071 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 9 01:04:49.934078 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:04:49.934085 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 9 01:04:49.934093 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 9 01:04:49.934101 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 9 01:04:49.934108 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 9 01:04:49.934115 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:04:49.934125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:04:49.934132 systemd[1]: Reached target paths.target - Path Units.
Oct 9 01:04:49.934139 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:04:49.934146 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:04:49.934153 systemd[1]: Reached target timers.target - Timer Units.
Oct 9 01:04:49.934160 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:04:49.934180 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:04:49.934188 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 01:04:49.934195 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 01:04:49.934205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:04:49.934212 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:04:49.934219 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:04:49.934226 systemd[1]: Reached target sockets.target - Socket Units.
Oct 9 01:04:49.934233 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 9 01:04:49.934240 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 9 01:04:49.934247 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 9 01:04:49.934254 systemd[1]: Starting systemd-fsck-usr.service...
Oct 9 01:04:49.934264 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 9 01:04:49.934271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 9 01:04:49.934277 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:04:49.934284 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 9 01:04:49.934314 systemd-journald[187]: Collecting audit messages is disabled.
Oct 9 01:04:49.934336 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:04:49.934343 systemd[1]: Finished systemd-fsck-usr.service.
Oct 9 01:04:49.934351 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 9 01:04:49.934358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:04:49.934368 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 9 01:04:49.934375 kernel: Bridge firewalling registered
Oct 9 01:04:49.934382 systemd-journald[187]: Journal started
Oct 9 01:04:49.934399 systemd-journald[187]: Runtime Journal (/run/log/journal/3a9ac8a47870457db9631a0644e1baf9) is 4.8M, max 38.4M, 33.6M free.
Oct 9 01:04:49.897100 systemd-modules-load[188]: Inserted module 'overlay'
Oct 9 01:04:49.964872 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 9 01:04:49.928472 systemd-modules-load[188]: Inserted module 'br_netfilter'
Oct 9 01:04:49.965600 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:04:49.966469 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:04:49.978133 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:04:49.980182 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 9 01:04:49.982394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 9 01:04:49.988289 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 9 01:04:49.999551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:04:50.004021 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:04:50.011108 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 9 01:04:50.012809 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:04:50.014315 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:04:50.020946 dracut-cmdline[218]: dracut-dracut-053
Oct 9 01:04:50.025126 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=ecc53326196a1bacd9ba781ce772ef34cdd5fe5561cf830307501ec3d5ba168a
Oct 9 01:04:50.024140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 9 01:04:50.050886 systemd-resolved[226]: Positive Trust Anchors:
Oct 9 01:04:50.050901 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 9 01:04:50.050928 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 9 01:04:50.057891 systemd-resolved[226]: Defaulting to hostname 'linux'.
Oct 9 01:04:50.058967 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 9 01:04:50.059552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:04:50.095001 kernel: SCSI subsystem initialized
Oct 9 01:04:50.103993 kernel: Loading iSCSI transport class v2.0-870.
Oct 9 01:04:50.114013 kernel: iscsi: registered transport (tcp)
Oct 9 01:04:50.132008 kernel: iscsi: registered transport (qla4xxx)
Oct 9 01:04:50.132064 kernel: QLogic iSCSI HBA Driver
Oct 9 01:04:50.174913 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:04:50.187145 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 9 01:04:50.209211 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 9 01:04:50.209274 kernel: device-mapper: uevent: version 1.0.3
Oct 9 01:04:50.209999 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 9 01:04:50.254016 kernel: raid6: avx2x4 gen() 31101 MB/s
Oct 9 01:04:50.271003 kernel: raid6: avx2x2 gen() 28390 MB/s
Oct 9 01:04:50.288072 kernel: raid6: avx2x1 gen() 22823 MB/s
Oct 9 01:04:50.288098 kernel: raid6: using algorithm avx2x4 gen() 31101 MB/s
Oct 9 01:04:50.307046 kernel: raid6: .... xor() 4943 MB/s, rmw enabled
Oct 9 01:04:50.307110 kernel: raid6: using avx2x2 recovery algorithm
Oct 9 01:04:50.326011 kernel: xor: automatically using best checksumming function avx
Oct 9 01:04:50.451021 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 9 01:04:50.464037 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:04:50.470136 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:04:50.482216 systemd-udevd[405]: Using default interface naming scheme 'v255'.
Oct 9 01:04:50.485900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:04:50.496145 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 9 01:04:50.510301 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Oct 9 01:04:50.541627 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:04:50.550097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 9 01:04:50.611528 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:04:50.622213 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 9 01:04:50.641165 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:04:50.642393 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:04:50.642864 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:04:50.646068 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:04:50.651156 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 9 01:04:50.671124 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:04:50.699995 kernel: cryptd: max_cpu_qlen set to 1000
Oct 9 01:04:50.703050 kernel: scsi host0: Virtio SCSI HBA
Oct 9 01:04:50.716275 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Oct 9 01:04:50.726029 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 9 01:04:50.727992 kernel: AES CTR mode by8 optimization enabled
Oct 9 01:04:50.730000 kernel: ACPI: bus type USB registered
Oct 9 01:04:50.730329 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:04:50.797097 kernel: usbcore: registered new interface driver usbfs
Oct 9 01:04:50.797128 kernel: usbcore: registered new interface driver hub
Oct 9 01:04:50.797142 kernel: usbcore: registered new device driver usb
Oct 9 01:04:50.781145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:04:50.793302 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:04:50.793897 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:04:50.794102 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:04:50.794722 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:04:50.809059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 9 01:04:50.838001 kernel: libata version 3.00 loaded.
Oct 9 01:04:50.860997 kernel: ahci 0000:00:1f.2: version 3.0
Oct 9 01:04:50.861217 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 9 01:04:50.862542 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Oct 9 01:04:50.862752 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 9 01:04:50.867998 kernel: sd 0:0:0:0: Power-on or device reset occurred
Oct 9 01:04:50.868196 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Oct 9 01:04:50.868335 kernel: sd 0:0:0:0: [sda] Write Protect is off
Oct 9 01:04:50.868467 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Oct 9 01:04:50.868597 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 9 01:04:50.871919 kernel: scsi host1: ahci
Oct 9 01:04:50.872613 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 9 01:04:50.872625 kernel: GPT:17805311 != 80003071
Oct 9 01:04:50.872634 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 9 01:04:50.872643 kernel: GPT:17805311 != 80003071
Oct 9 01:04:50.872651 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 9 01:04:50.872659 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 9 01:04:50.872667 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Oct 9 01:04:50.872842 kernel: scsi host2: ahci
Oct 9 01:04:50.889207 kernel: scsi host3: ahci
Oct 9 01:04:50.889247 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 9 01:04:50.891664 kernel: scsi host4: ahci
Oct 9 01:04:50.891828 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Oct 9 01:04:50.891969 kernel: scsi host5: ahci
Oct 9 01:04:50.892023 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Oct 9 01:04:50.895282 kernel: scsi host6: ahci
Oct 9 01:04:50.895317 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Oct 9 01:04:50.899430 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 46
Oct 9 01:04:50.899451 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Oct 9 01:04:50.899594 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Oct 9 01:04:50.899721 kernel: hub 1-0:1.0: USB hub found
Oct 9 01:04:50.899872 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 46
Oct 9 01:04:50.899886 kernel: hub 1-0:1.0: 4 ports detected
Oct 9 01:04:50.900053 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 46
Oct 9 01:04:50.900064 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 46
Oct 9 01:04:50.900073 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Oct 9 01:04:50.900212 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 46
Oct 9 01:04:50.900221 kernel: hub 2-0:1.0: USB hub found
Oct 9 01:04:50.900357 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 46
Oct 9 01:04:50.900371 kernel: hub 2-0:1.0: 4 ports detected
Oct 9 01:04:50.925286 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (449)
Oct 9 01:04:50.932020 kernel: BTRFS: device fsid 6ed52ce5-b2f8-4d16-8889-677a209bc377 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (455)
Oct 9 01:04:50.932493 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Oct 9 01:04:50.934018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:04:50.940217 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Oct 9 01:04:50.952021 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 9 01:04:50.956490 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Oct 9 01:04:50.960216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Oct 9 01:04:50.966094 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 9 01:04:50.969100 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 9 01:04:50.974422 disk-uuid[565]: Primary Header is updated.
Oct 9 01:04:50.974422 disk-uuid[565]: Secondary Entries is updated.
Oct 9 01:04:50.974422 disk-uuid[565]: Secondary Header is updated.
Oct 9 01:04:50.980236 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 9 01:04:50.987005 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 9 01:04:50.989079 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:04:50.994007 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 9 01:04:51.148340 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Oct 9 01:04:51.213525 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 9 01:04:51.213590 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 9 01:04:51.213602 kernel: ata1.00: applying bridge limits
Oct 9 01:04:51.215501 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 9 01:04:51.215987 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 9 01:04:51.219733 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct 9 01:04:51.219758 kernel: ata1.00: configured for UDMA/100
Oct 9 01:04:51.219779 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 9 01:04:51.220219 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 9 01:04:51.220238 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 9 01:04:51.260172 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 9 01:04:51.260490 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 9 01:04:51.270433 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Oct 9 01:04:51.290005 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 9 01:04:51.295475 kernel: usbcore: registered new interface driver usbhid
Oct 9 01:04:51.295538 kernel: usbhid: USB HID core driver
Oct 9 01:04:51.300330 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Oct 9 01:04:51.300479 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Oct 9 01:04:51.993034 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 9 01:04:51.995025 disk-uuid[566]: The operation has completed successfully.
Oct 9 01:04:52.048875 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 9 01:04:52.049021 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 9 01:04:52.062090 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 9 01:04:52.065368 sh[595]: Success
Oct 9 01:04:52.077002 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Oct 9 01:04:52.117578 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 9 01:04:52.130082 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 9 01:04:52.132180 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 9 01:04:52.145324 kernel: BTRFS info (device dm-0): first mount of filesystem 6ed52ce5-b2f8-4d16-8889-677a209bc377
Oct 9 01:04:52.145356 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:04:52.148006 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 9 01:04:52.148032 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 9 01:04:52.149248 kernel: BTRFS info (device dm-0): using free space tree
Oct 9 01:04:52.157996 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 9 01:04:52.159208 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 9 01:04:52.160262 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 9 01:04:52.169117 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 9 01:04:52.171117 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 9 01:04:52.180998 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:04:52.181025 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:04:52.183576 kernel: BTRFS info (device sda6): using free space tree
Oct 9 01:04:52.187748 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 9 01:04:52.187770 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 9 01:04:52.198354 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 9 01:04:52.199084 kernel: BTRFS info (device sda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:04:52.204051 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 9 01:04:52.209179 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 9 01:04:52.276241 ignition[696]: Ignition 2.19.0
Oct 9 01:04:52.276253 ignition[696]: Stage: fetch-offline
Oct 9 01:04:52.278271 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:04:52.276287 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:04:52.276296 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:04:52.276386 ignition[696]: parsed url from cmdline: ""
Oct 9 01:04:52.276390 ignition[696]: no config URL provided
Oct 9 01:04:52.276395 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 01:04:52.281956 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:04:52.276403 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Oct 9 01:04:52.276407 ignition[696]: failed to fetch config: resource requires networking
Oct 9 01:04:52.276582 ignition[696]: Ignition finished successfully
Oct 9 01:04:52.289124 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 9 01:04:52.307686 systemd-networkd[782]: lo: Link UP
Oct 9 01:04:52.307694 systemd-networkd[782]: lo: Gained carrier
Oct 9 01:04:52.310042 systemd-networkd[782]: Enumeration completed
Oct 9 01:04:52.310354 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 9 01:04:52.311300 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:04:52.311306 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:04:52.311944 systemd[1]: Reached target network.target - Network.
Oct 9 01:04:52.312664 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:04:52.312668 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 9 01:04:52.314207 systemd-networkd[782]: eth0: Link UP
Oct 9 01:04:52.314211 systemd-networkd[782]: eth0: Gained carrier
Oct 9 01:04:52.314220 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:04:52.319190 systemd-networkd[782]: eth1: Link UP
Oct 9 01:04:52.319196 systemd-networkd[782]: eth1: Gained carrier
Oct 9 01:04:52.319202 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 9 01:04:52.325095 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 9 01:04:52.336106 ignition[784]: Ignition 2.19.0
Oct 9 01:04:52.336119 ignition[784]: Stage: fetch
Oct 9 01:04:52.336252 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:04:52.336266 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:04:52.336361 ignition[784]: parsed url from cmdline: ""
Oct 9 01:04:52.336364 ignition[784]: no config URL provided
Oct 9 01:04:52.336369 ignition[784]: reading system config file "/usr/lib/ignition/user.ign"
Oct 9 01:04:52.336378 ignition[784]: no config at "/usr/lib/ignition/user.ign"
Oct 9 01:04:52.336406 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Oct 9 01:04:52.336556 ignition[784]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Oct 9 01:04:52.352028 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 9 01:04:52.460054 systemd-networkd[782]: eth0: DHCPv4 address 49.13.59.7/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 9 01:04:52.537014 ignition[784]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Oct 9 01:04:52.541484 ignition[784]: GET result: OK
Oct 9 01:04:52.541553 ignition[784]: parsing config with SHA512: 16df8a80eff08c910d725302022b1a26e021b2211ad910aa14e81da72a23984205f3255f0f0ffa77dc081a1a5a7008ede22073300662c2fc33c555b3896921bc
Oct 9 01:04:52.545872 unknown[784]: fetched base config from "system"
Oct 9 01:04:52.545883 unknown[784]: fetched base config from "system"
Oct 9 01:04:52.546352 ignition[784]: fetch: fetch complete
Oct 9 01:04:52.545890 unknown[784]: fetched user config from "hetzner"
Oct 9 01:04:52.546358 ignition[784]: fetch: fetch passed
Oct 9 01:04:52.549382 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 9 01:04:52.546403 ignition[784]: Ignition finished successfully
Oct 9 01:04:52.557132 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 9 01:04:52.570399 ignition[792]: Ignition 2.19.0
Oct 9 01:04:52.570416 ignition[792]: Stage: kargs
Oct 9 01:04:52.570620 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:04:52.572930 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 9 01:04:52.570633 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:04:52.571455 ignition[792]: kargs: kargs passed
Oct 9 01:04:52.571504 ignition[792]: Ignition finished successfully
Oct 9 01:04:52.582230 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 9 01:04:52.593262 ignition[798]: Ignition 2.19.0
Oct 9 01:04:52.593945 ignition[798]: Stage: disks
Oct 9 01:04:52.594137 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Oct 9 01:04:52.594179 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:04:52.594929 ignition[798]: disks: disks passed
Oct 9 01:04:52.596311 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 9 01:04:52.594966 ignition[798]: Ignition finished successfully
Oct 9 01:04:52.597556 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 9 01:04:52.598482 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 9 01:04:52.599472 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 9 01:04:52.600531 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 9 01:04:52.601646 systemd[1]: Reached target basic.target - Basic System.
Oct 9 01:04:52.607133 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 9 01:04:52.620129 systemd-fsck[806]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 9 01:04:52.622852 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 9 01:04:52.628121 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 9 01:04:52.707009 kernel: EXT4-fs (sda9): mounted filesystem ba2945c1-be14-41c0-8c54-84d676c7a16b r/w with ordered data mode. Quota mode: none.
Oct 9 01:04:52.707218 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 9 01:04:52.708189 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 9 01:04:52.713053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:04:52.714799 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 9 01:04:52.719109 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 9 01:04:52.721966 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 9 01:04:52.723097 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:04:52.724733 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 9 01:04:52.729425 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (814)
Oct 9 01:04:52.729455 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:04:52.731166 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:04:52.733322 kernel: BTRFS info (device sda6): using free space tree
Oct 9 01:04:52.733485 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 9 01:04:52.739915 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 9 01:04:52.739943 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 9 01:04:52.743334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:04:52.784588 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Oct 9 01:04:52.785954 coreos-metadata[816]: Oct 09 01:04:52.785 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 9 01:04:52.788014 coreos-metadata[816]: Oct 09 01:04:52.787 INFO Fetch successful
Oct 9 01:04:52.788014 coreos-metadata[816]: Oct 09 01:04:52.787 INFO wrote hostname ci-4116-0-0-f-4ef11beaf3 to /sysroot/etc/hostname
Oct 9 01:04:52.791486 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Oct 9 01:04:52.790564 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 01:04:52.796223 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory
Oct 9 01:04:52.800102 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 9 01:04:52.879858 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 9 01:04:52.887095 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 9 01:04:52.890108 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 9 01:04:52.898034 kernel: BTRFS info (device sda6): last unmount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:04:52.914530 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 9 01:04:52.917083 ignition[936]: INFO : Ignition 2.19.0
Oct 9 01:04:52.917083 ignition[936]: INFO : Stage: mount
Oct 9 01:04:52.918126 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:04:52.918126 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:04:52.918126 ignition[936]: INFO : mount: mount passed
Oct 9 01:04:52.918126 ignition[936]: INFO : Ignition finished successfully
Oct 9 01:04:52.919009 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 9 01:04:52.928091 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 9 01:04:53.145805 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 9 01:04:53.153233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 9 01:04:53.169011 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (948)
Oct 9 01:04:53.172687 kernel: BTRFS info (device sda6): first mount of filesystem 7abc21fd-6b75-4be0-8205-dc564a91a608
Oct 9 01:04:53.172750 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 9 01:04:53.176014 kernel: BTRFS info (device sda6): using free space tree
Oct 9 01:04:53.182951 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 9 01:04:53.183018 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 9 01:04:53.187551 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 9 01:04:53.230487 ignition[965]: INFO : Ignition 2.19.0
Oct 9 01:04:53.230487 ignition[965]: INFO : Stage: files
Oct 9 01:04:53.232971 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:04:53.232971 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:04:53.232971 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Oct 9 01:04:53.237035 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 9 01:04:53.237035 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 9 01:04:53.239881 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 9 01:04:53.239881 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 9 01:04:53.242584 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 9 01:04:53.242584 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 01:04:53.239924 unknown[965]: wrote ssh authorized keys file for user: core
Oct 9 01:04:53.246791 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Oct 9 01:04:53.246791 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:04:53.246791 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Oct 9 01:04:53.390837 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 9 01:04:53.600183 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 9 01:04:53.602238 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:04:53.613474 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 9 01:04:53.613474 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:04:53.613474 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:04:53.613474 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:04:53.613474 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Oct 9 01:04:53.771227 systemd-networkd[782]: eth0: Gained IPv6LL
Oct 9 01:04:54.173210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Oct 9 01:04:54.283215 systemd-networkd[782]: eth1: Gained IPv6LL
Oct 9 01:04:54.418342 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Oct 9 01:04:54.418342 ignition[965]: INFO : files: op(c): [started] processing unit "containerd.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(c): [finished] processing unit "containerd.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(10): op(11): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(10): op(11): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Oct 9 01:04:54.420239 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:04:54.422403 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 9 01:04:54.432352 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 9 01:04:54.432352 ignition[965]: INFO : files: files passed
Oct 9 01:04:54.432352 ignition[965]: INFO : Ignition finished successfully
Oct 9 01:04:54.432653 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 9 01:04:54.435125 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 9 01:04:54.436399 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 9 01:04:54.436514 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 9 01:04:54.448097 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:04:54.448928 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:04:54.450667 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 9 01:04:54.452380 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:04:54.453499 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 9 01:04:54.457151 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 9 01:04:54.488639 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 9 01:04:54.488764 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 9 01:04:54.489940 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 9 01:04:54.490961 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 9 01:04:54.492011 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 9 01:04:54.499182 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 9 01:04:54.511527 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:04:54.518122 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 9 01:04:54.526039 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 9 01:04:54.526654 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:04:54.527721 systemd[1]: Stopped target timers.target - Timer Units.
Oct 9 01:04:54.528727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 9 01:04:54.528863 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 9 01:04:54.530081 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 9 01:04:54.530725 systemd[1]: Stopped target basic.target - Basic System.
Oct 9 01:04:54.531832 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 9 01:04:54.532827 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 9 01:04:54.533834 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 9 01:04:54.534932 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 9 01:04:54.535964 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 9 01:04:54.537127 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 9 01:04:54.538133 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 9 01:04:54.539206 systemd[1]: Stopped target swap.target - Swaps.
Oct 9 01:04:54.540180 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 9 01:04:54.540317 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 9 01:04:54.541381 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 9 01:04:54.542218 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:04:54.543186 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 9 01:04:54.545084 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:04:54.545704 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 9 01:04:54.545800 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 9 01:04:54.547395 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 9 01:04:54.547541 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 9 01:04:54.548698 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 9 01:04:54.548789 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 9 01:04:54.549738 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 9 01:04:54.549829 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 9 01:04:54.562446 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 9 01:04:54.565163 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 9 01:04:54.565633 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 9 01:04:54.565779 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 9 01:04:54.567834 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 9 01:04:54.569036 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 9 01:04:54.578540 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 9 01:04:54.579027 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 9 01:04:54.582477 ignition[1018]: INFO : Ignition 2.19.0
Oct 9 01:04:54.584134 ignition[1018]: INFO : Stage: umount
Oct 9 01:04:54.584134 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 9 01:04:54.584134 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 9 01:04:54.584134 ignition[1018]: INFO : umount: umount passed
Oct 9 01:04:54.584134 ignition[1018]: INFO : Ignition finished successfully
Oct 9 01:04:54.587240 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 9 01:04:54.587358 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 9 01:04:54.589327 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 9 01:04:54.589401 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 9 01:04:54.591204 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 9 01:04:54.591253 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 9 01:04:54.591696 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 9 01:04:54.591736 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 9 01:04:54.592308 systemd[1]: Stopped target network.target - Network.
Oct 9 01:04:54.592691 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 9 01:04:54.592737 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 9 01:04:54.595109 systemd[1]: Stopped target paths.target - Path Units.
Oct 9 01:04:54.595705 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 9 01:04:54.600769 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:04:54.602396 systemd[1]: Stopped target slices.target - Slice Units.
Oct 9 01:04:54.604293 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 9 01:04:54.604714 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 9 01:04:54.604755 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 9 01:04:54.605251 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 9 01:04:54.605290 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 9 01:04:54.613125 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 9 01:04:54.613174 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 9 01:04:54.613636 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 9 01:04:54.613679 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 9 01:04:54.616119 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 9 01:04:54.616997 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 9 01:04:54.621717 systemd-networkd[782]: eth0: DHCPv6 lease lost
Oct 9 01:04:54.621850 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 9 01:04:54.622784 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 9 01:04:54.622915 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 9 01:04:54.623926 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 9 01:04:54.624031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 9 01:04:54.625035 systemd-networkd[782]: eth1: DHCPv6 lease lost
Oct 9 01:04:54.626289 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 9 01:04:54.626405 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 9 01:04:54.627743 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 9 01:04:54.627809 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:04:54.636121 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 9 01:04:54.636616 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 9 01:04:54.636675 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 9 01:04:54.637951 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 9 01:04:54.641244 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 9 01:04:54.641357 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 9 01:04:54.650334 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 9 01:04:54.650418 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 9 01:04:54.650949 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 9 01:04:54.651512 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 9 01:04:54.652128 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 9 01:04:54.652173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 9 01:04:54.653499 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 9 01:04:54.653609 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 9 01:04:54.655505 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 9 01:04:54.655688 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 9 01:04:54.657339 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 9 01:04:54.657395 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:04:54.658096 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 9 01:04:54.658132 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:04:54.659009 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 9 01:04:54.659056 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 9 01:04:54.660583 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 9 01:04:54.660627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 9 01:04:54.661640 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 9 01:04:54.661687 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 9 01:04:54.670159 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 9 01:04:54.671360 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 9 01:04:54.671452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 9 01:04:54.672017 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 9 01:04:54.672083 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 9 01:04:54.672645 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 9 01:04:54.672707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 9 01:04:54.675084 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 9 01:04:54.675147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 9 01:04:54.676886 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 9 01:04:54.677058 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 9 01:04:54.678552 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 9 01:04:54.685419 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 9 01:04:54.693333 systemd[1]: Switching root.
Oct 9 01:04:54.720161 systemd-journald[187]: Journal stopped
Oct 9 01:04:55.679882 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Oct 9 01:04:55.679943 kernel: SELinux: policy capability network_peer_controls=1
Oct 9 01:04:55.679959 kernel: SELinux: policy capability open_perms=1
Oct 9 01:04:55.680016 kernel: SELinux: policy capability extended_socket_class=1
Oct 9 01:04:55.680034 kernel: SELinux: policy capability always_check_network=0
Oct 9 01:04:55.680044 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 9 01:04:55.680054 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 9 01:04:55.680063 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 9 01:04:55.680077 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 9 01:04:55.680090 kernel: audit: type=1403 audit(1728435894.893:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 9 01:04:55.680101 systemd[1]: Successfully loaded SELinux policy in 46.977ms.
Oct 9 01:04:55.680120 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.997ms.
Oct 9 01:04:55.680131 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 9 01:04:55.680141 systemd[1]: Detected virtualization kvm.
Oct 9 01:04:55.680152 systemd[1]: Detected architecture x86-64.
Oct 9 01:04:55.680162 systemd[1]: Detected first boot.
Oct 9 01:04:55.680172 systemd[1]: Hostname set to .
Oct 9 01:04:55.680182 systemd[1]: Initializing machine ID from VM UUID.
Oct 9 01:04:55.680192 zram_generator::config[1078]: No configuration found.
Oct 9 01:04:55.680206 systemd[1]: Populated /etc with preset unit settings.
Oct 9 01:04:55.680257 systemd[1]: Queued start job for default target multi-user.target.
Oct 9 01:04:55.680269 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 9 01:04:55.680281 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 9 01:04:55.680291 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 9 01:04:55.680301 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 9 01:04:55.680310 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 9 01:04:55.680320 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 9 01:04:55.680334 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 9 01:04:55.681621 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 9 01:04:55.681634 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 9 01:04:55.681645 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 9 01:04:55.681656 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 9 01:04:55.681666 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 9 01:04:55.681676 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 9 01:04:55.681692 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 9 01:04:55.681703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 9 01:04:55.681715 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 9 01:04:55.681726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 9 01:04:55.681736 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 9 01:04:55.681745 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 9 01:04:55.681756 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 9 01:04:55.681767 systemd[1]: Reached target slices.target - Slice Units.
Oct 9 01:04:55.681777 systemd[1]: Reached target swap.target - Swaps.
Oct 9 01:04:55.681789 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 9 01:04:55.681799 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 9 01:04:55.681809 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 9 01:04:55.681819 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 9 01:04:55.681829 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 9 01:04:55.681842 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 9 01:04:55.681856 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 9 01:04:55.681867 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 9 01:04:55.681877 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 9 01:04:55.681887 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 9 01:04:55.681897 systemd[1]: Mounting media.mount - External Media Directory...
Oct 9 01:04:55.681907 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 9 01:04:55.681917 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 9 01:04:55.681927 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 9 01:04:55.681937 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 9 01:04:55.681949 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 9 01:04:55.681959 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 9 01:04:55.681970 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 9 01:04:55.682022 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 9 01:04:55.682033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:04:55.682043 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:04:55.682053 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:04:55.682063 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 9 01:04:55.682076 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:04:55.682087 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 01:04:55.682097 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Oct 9 01:04:55.682108 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Oct 9 01:04:55.682118 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 9 01:04:55.682165 kernel: fuse: init (API version 7.39) Oct 9 01:04:55.682177 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 9 01:04:55.682210 systemd-journald[1184]: Collecting audit messages is disabled. Oct 9 01:04:55.682234 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 9 01:04:55.682245 kernel: ACPI: bus type drm_connector registered Oct 9 01:04:55.682255 systemd-journald[1184]: Journal started Oct 9 01:04:55.682274 systemd-journald[1184]: Runtime Journal (/run/log/journal/3a9ac8a47870457db9631a0644e1baf9) is 4.8M, max 38.4M, 33.6M free. 
Oct 9 01:04:55.688101 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 9 01:04:55.697171 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 9 01:04:55.704022 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:04:55.704068 kernel: loop: module loaded Oct 9 01:04:55.714007 systemd[1]: Started systemd-journald.service - Journal Service. Oct 9 01:04:55.714680 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 9 01:04:55.715543 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 9 01:04:55.716335 systemd[1]: Mounted media.mount - External Media Directory. Oct 9 01:04:55.717083 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 9 01:04:55.717813 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 9 01:04:55.718561 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 9 01:04:55.719374 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 9 01:04:55.720213 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 9 01:04:55.721042 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 9 01:04:55.721228 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 9 01:04:55.722080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:04:55.722367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:04:55.723583 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:04:55.723816 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:04:55.724581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Oct 9 01:04:55.724758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:04:55.725652 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 9 01:04:55.725882 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 9 01:04:55.726670 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:04:55.726922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:04:55.727950 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 9 01:04:55.728771 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 9 01:04:55.729789 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 9 01:04:55.743424 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 9 01:04:55.749077 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 9 01:04:55.753882 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 9 01:04:55.754419 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 01:04:55.758216 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 9 01:04:55.770215 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 9 01:04:55.770956 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:04:55.781126 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 9 01:04:55.782658 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Oct 9 01:04:55.785634 systemd-journald[1184]: Time spent on flushing to /var/log/journal/3a9ac8a47870457db9631a0644e1baf9 is 26.498ms for 1120 entries. Oct 9 01:04:55.785634 systemd-journald[1184]: System Journal (/var/log/journal/3a9ac8a47870457db9631a0644e1baf9) is 8.0M, max 584.8M, 576.8M free. Oct 9 01:04:55.823046 systemd-journald[1184]: Received client request to flush runtime journal. Oct 9 01:04:55.793109 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 9 01:04:55.805673 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 9 01:04:55.815420 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 9 01:04:55.817117 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 9 01:04:55.817899 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 9 01:04:55.821369 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 9 01:04:55.827951 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 9 01:04:55.853470 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 9 01:04:55.864166 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Oct 9 01:04:55.864498 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Oct 9 01:04:55.872337 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 9 01:04:55.882097 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 9 01:04:55.883936 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 9 01:04:55.894170 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 9 01:04:55.906795 udevadm[1239]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 9 01:04:55.917795 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 9 01:04:55.928276 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 9 01:04:55.943337 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Oct 9 01:04:55.943647 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Oct 9 01:04:55.950203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 9 01:04:56.232576 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 9 01:04:56.238155 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 9 01:04:56.261924 systemd-udevd[1253]: Using default interface naming scheme 'v255'. Oct 9 01:04:56.282763 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 9 01:04:56.291175 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 9 01:04:56.308157 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 9 01:04:56.337803 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Oct 9 01:04:56.347891 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 9 01:04:56.364112 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1264) Oct 9 01:04:56.374029 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1264) Oct 9 01:04:56.432167 systemd-networkd[1262]: lo: Link UP Oct 9 01:04:56.432176 systemd-networkd[1262]: lo: Gained carrier Oct 9 01:04:56.436374 systemd-networkd[1262]: Enumeration completed Oct 9 01:04:56.437494 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Oct 9 01:04:56.440046 systemd-networkd[1262]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:04:56.442472 systemd-networkd[1262]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:04:56.443338 systemd-networkd[1262]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:04:56.443543 systemd-networkd[1262]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 9 01:04:56.445636 systemd-networkd[1262]: eth0: Link UP Oct 9 01:04:56.446040 systemd-networkd[1262]: eth0: Gained carrier Oct 9 01:04:56.446191 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 9 01:04:56.446266 systemd-networkd[1262]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:04:56.459002 kernel: mousedev: PS/2 mouse device common for all mice Oct 9 01:04:56.459187 systemd-networkd[1262]: eth1: Link UP Oct 9 01:04:56.459198 systemd-networkd[1262]: eth1: Gained carrier Oct 9 01:04:56.459235 systemd-networkd[1262]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:04:56.464492 systemd-networkd[1262]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 9 01:04:56.470918 systemd-networkd[1262]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 9 01:04:56.474023 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 9 01:04:56.479013 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1271) Oct 9 01:04:56.481732 kernel: ACPI: button: Power Button [PWRF] Oct 9 01:04:56.481114 systemd-networkd[1262]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 9 01:04:56.531093 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Oct 9 01:04:56.531116 systemd[1]: Condition check resulted in dev-vport2p1.device - /dev/vport2p1 being skipped. Oct 9 01:04:56.531160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:04:56.531281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:04:56.541730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:04:56.552148 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 9 01:04:56.556733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:04:56.568020 kernel: EDAC MC: Ver: 3.0.0 Oct 9 01:04:56.567078 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:04:56.568850 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 9 01:04:56.568885 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 9 01:04:56.568920 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 9 01:04:56.573844 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 9 01:04:56.574098 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 9 01:04:56.574471 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 9 01:04:56.573170 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:04:56.573359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:04:56.575053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:04:56.575236 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:04:56.577170 systemd-networkd[1262]: eth0: DHCPv4 address 49.13.59.7/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 9 01:04:56.578921 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:04:56.580314 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:04:56.580713 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:04:56.582544 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:04:56.591052 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Oct 9 01:04:56.591085 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Oct 9 01:04:56.594027 kernel: Console: switching to colour dummy device 80x25 Oct 9 01:04:56.595215 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 9 01:04:56.595253 kernel: [drm] features: -context_init Oct 9 01:04:56.596148 kernel: [drm] number of scanouts: 1 Oct 9 01:04:56.596171 kernel: [drm] number of cap sets: 0 Oct 9 01:04:56.605940 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Oct 9 01:04:56.605831 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Oct 9 01:04:56.613008 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Oct 9 01:04:56.613040 kernel: Console: switching to colour frame buffer device 160x50 Oct 9 01:04:56.623256 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:04:56.626646 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 9 01:04:56.637732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 9 01:04:56.638040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:04:56.649211 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 9 01:04:56.697389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 9 01:04:56.766367 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 9 01:04:56.771234 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 9 01:04:56.785143 lvm[1325]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:04:56.817717 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 9 01:04:56.817959 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 9 01:04:56.824234 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 9 01:04:56.828609 lvm[1328]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 9 01:04:56.855848 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 9 01:04:56.856878 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 9 01:04:56.858186 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Oct 9 01:04:56.858211 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 9 01:04:56.858288 systemd[1]: Reached target machines.target - Containers. Oct 9 01:04:56.859267 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 9 01:04:56.863155 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 9 01:04:56.865643 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 9 01:04:56.866944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:04:56.872416 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 9 01:04:56.878885 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 9 01:04:56.887901 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 9 01:04:56.890860 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 9 01:04:56.907001 kernel: loop0: detected capacity change from 0 to 211296 Oct 9 01:04:56.904332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 9 01:04:56.916343 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 9 01:04:56.917646 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Oct 9 01:04:56.936251 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 9 01:04:56.958012 kernel: loop1: detected capacity change from 0 to 138192 Oct 9 01:04:56.992008 kernel: loop2: detected capacity change from 0 to 8 Oct 9 01:04:57.013925 kernel: loop3: detected capacity change from 0 to 140992 Oct 9 01:04:57.053004 kernel: loop4: detected capacity change from 0 to 211296 Oct 9 01:04:57.073021 kernel: loop5: detected capacity change from 0 to 138192 Oct 9 01:04:57.089091 kernel: loop6: detected capacity change from 0 to 8 Oct 9 01:04:57.092091 kernel: loop7: detected capacity change from 0 to 140992 Oct 9 01:04:57.108140 (sd-merge)[1349]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Oct 9 01:04:57.108702 (sd-merge)[1349]: Merged extensions into '/usr'. Oct 9 01:04:57.114809 systemd[1]: Reloading requested from client PID 1336 ('systemd-sysext') (unit systemd-sysext.service)... Oct 9 01:04:57.114822 systemd[1]: Reloading... Oct 9 01:04:57.194015 zram_generator::config[1386]: No configuration found. Oct 9 01:04:57.256827 ldconfig[1332]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 9 01:04:57.307808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:04:57.364511 systemd[1]: Reloading finished in 249 ms. Oct 9 01:04:57.381740 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 9 01:04:57.384920 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 9 01:04:57.395099 systemd[1]: Starting ensure-sysext.service... Oct 9 01:04:57.399101 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 9 01:04:57.403586 systemd[1]: Reloading requested from client PID 1427 ('systemctl') (unit ensure-sysext.service)... Oct 9 01:04:57.403676 systemd[1]: Reloading... Oct 9 01:04:57.422445 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 9 01:04:57.422872 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 9 01:04:57.424442 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 9 01:04:57.424770 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Oct 9 01:04:57.424878 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Oct 9 01:04:57.430173 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:04:57.430182 systemd-tmpfiles[1428]: Skipping /boot Oct 9 01:04:57.451038 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Oct 9 01:04:57.451050 systemd-tmpfiles[1428]: Skipping /boot Oct 9 01:04:57.490053 zram_generator::config[1470]: No configuration found. Oct 9 01:04:57.581849 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:04:57.637375 systemd[1]: Reloading finished in 233 ms. Oct 9 01:04:57.656829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 9 01:04:57.670147 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 9 01:04:57.676096 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 9 01:04:57.681238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 9 01:04:57.689952 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 9 01:04:57.695336 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 9 01:04:57.704608 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:04:57.704765 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:04:57.708171 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:04:57.718234 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:04:57.724037 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:04:57.724733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:04:57.724883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:04:57.734266 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:04:57.734457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:04:57.740626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:04:57.753128 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:04:57.758827 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:04:57.760108 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:04:57.766173 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 9 01:04:57.782637 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 9 01:04:57.791127 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 9 01:04:57.792033 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 9 01:04:57.798363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 9 01:04:57.806406 augenrules[1549]: No rules Oct 9 01:04:57.808716 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 9 01:04:57.812186 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 9 01:04:57.825133 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 9 01:04:57.825738 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 9 01:04:57.830749 systemd-resolved[1512]: Positive Trust Anchors: Oct 9 01:04:57.830765 systemd-resolved[1512]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 9 01:04:57.830790 systemd-resolved[1512]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 9 01:04:57.833118 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 9 01:04:57.833686 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 9 01:04:57.834594 systemd[1]: Finished ensure-sysext.service. Oct 9 01:04:57.840548 systemd[1]: audit-rules.service: Deactivated successfully. 
Oct 9 01:04:57.840813 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 9 01:04:57.841596 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 9 01:04:57.841843 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 9 01:04:57.842773 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 9 01:04:57.842963 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 9 01:04:57.843945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 9 01:04:57.844142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 9 01:04:57.844802 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 9 01:04:57.845116 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 9 01:04:57.849605 systemd-resolved[1512]: Using system hostname 'ci-4116-0-0-f-4ef11beaf3'. Oct 9 01:04:57.853643 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 9 01:04:57.857940 systemd[1]: Reached target network.target - Network. Oct 9 01:04:57.859612 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 9 01:04:57.860138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 9 01:04:57.860205 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 9 01:04:57.867129 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 9 01:04:57.869722 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 9 01:04:57.882027 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Oct 9 01:04:57.883325 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 9 01:04:57.929044 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 9 01:04:57.931120 systemd[1]: Reached target sysinit.target - System Initialization. Oct 9 01:04:57.931736 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 9 01:04:57.934685 systemd-networkd[1262]: eth1: Gained IPv6LL Oct 9 01:04:57.935110 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 9 01:04:57.937556 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 9 01:04:57.938001 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 9 01:04:57.938032 systemd[1]: Reached target paths.target - Path Units. Oct 9 01:04:57.938442 systemd[1]: Reached target time-set.target - System Time Set. Oct 9 01:04:57.939339 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 9 01:04:57.941681 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 9 01:04:57.942107 systemd[1]: Reached target timers.target - Timer Units. Oct 9 01:04:57.945641 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 9 01:04:57.948394 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 9 01:04:57.951723 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 9 01:04:57.955150 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 9 01:04:57.955780 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Oct 9 01:04:57.956286 systemd-timesyncd[1570]: Contacted time server 195.201.19.162:123 (0.flatcar.pool.ntp.org). Oct 9 01:04:57.956330 systemd-timesyncd[1570]: Initial clock synchronization to Wed 2024-10-09 01:04:58.230974 UTC. Oct 9 01:04:57.958804 systemd[1]: Reached target network-online.target - Network is Online. Oct 9 01:04:57.959566 systemd[1]: Reached target sockets.target - Socket Units. Oct 9 01:04:57.960180 systemd[1]: Reached target basic.target - Basic System. Oct 9 01:04:57.960909 systemd[1]: System is tainted: cgroupsv1 Oct 9 01:04:57.961112 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:04:57.961149 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 9 01:04:57.971101 systemd[1]: Starting containerd.service - containerd container runtime... Oct 9 01:04:57.976134 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 9 01:04:57.983124 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 9 01:04:57.987099 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 9 01:04:57.997154 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 9 01:04:57.999294 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 9 01:04:58.011375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:04:58.020627 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 9 01:04:58.027345 jq[1584]: false Oct 9 01:04:58.037086 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 9 01:04:58.041108 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 9 01:04:58.049593 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. 
Oct 9 01:04:58.053611 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 9 01:04:58.057336 coreos-metadata[1582]: Oct 09 01:04:58.057 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 9 01:04:58.060763 systemd-networkd[1262]: eth0: Gained IPv6LL
Oct 9 01:04:58.061923 coreos-metadata[1582]: Oct 09 01:04:58.061 INFO Fetch successful
Oct 9 01:04:58.061923 coreos-metadata[1582]: Oct 09 01:04:58.061 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 9 01:04:58.067871 dbus-daemon[1583]: [system] SELinux support is enabled
Oct 9 01:04:58.072946 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 9 01:04:58.074423 coreos-metadata[1582]: Oct 09 01:04:58.071 INFO Fetch successful
Oct 9 01:04:58.081392 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 9 01:04:58.083958 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 9 01:04:58.091483 systemd[1]: Starting update-engine.service - Update Engine...
Oct 9 01:04:58.108098 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 9 01:04:58.112566 jq[1614]: true
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found loop4
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found loop5
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found loop6
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found loop7
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda1
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda2
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda3
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found usr
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda4
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda6
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda7
Oct 9 01:04:58.115244 extend-filesystems[1587]: Found sda9
Oct 9 01:04:58.115244 extend-filesystems[1587]: Checking size of /dev/sda9
Oct 9 01:04:58.116226 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 9 01:04:58.135760 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 9 01:04:58.169699 update_engine[1611]: I20241009 01:04:58.168108 1611 main.cc:92] Flatcar Update Engine starting
Oct 9 01:04:58.136107 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 9 01:04:58.183372 update_engine[1611]: I20241009 01:04:58.172756 1611 update_check_scheduler.cc:74] Next update check in 8m25s
Oct 9 01:04:58.140543 systemd[1]: motdgen.service: Deactivated successfully.
Oct 9 01:04:58.140823 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 9 01:04:58.174863 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 9 01:04:58.184433 extend-filesystems[1587]: Resized partition /dev/sda9
Oct 9 01:04:58.191744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 9 01:04:58.193344 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 9 01:04:58.212715 systemd-logind[1608]: New seat seat0.
Oct 9 01:04:58.215159 extend-filesystems[1634]: resize2fs 1.47.1 (20-May-2024)
Oct 9 01:04:58.233096 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 9 01:04:58.233130 jq[1633]: true
Oct 9 01:04:58.223585 systemd-logind[1608]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 9 01:04:58.223605 systemd-logind[1608]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 9 01:04:58.233797 (ntainerd)[1635]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 9 01:04:58.236262 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 9 01:04:58.289660 tar[1630]: linux-amd64/helm
Oct 9 01:04:58.290703 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 9 01:04:58.295462 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 9 01:04:58.295565 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 9 01:04:58.295606 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 9 01:04:58.296696 dbus-daemon[1583]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 9 01:04:58.300827 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 9 01:04:58.300858 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 9 01:04:58.301527 systemd[1]: Started update-engine.service - Update Engine.
Oct 9 01:04:58.308388 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 9 01:04:58.314399 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 9 01:04:58.375771 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1274)
Oct 9 01:04:58.423337 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 9 01:04:58.456140 bash[1675]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:04:58.424702 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 9 01:04:58.444436 systemd[1]: Starting sshkeys.service...
Oct 9 01:04:58.465903 extend-filesystems[1634]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 9 01:04:58.465903 extend-filesystems[1634]: old_desc_blocks = 1, new_desc_blocks = 5
Oct 9 01:04:58.465903 extend-filesystems[1634]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Oct 9 01:04:58.485587 extend-filesystems[1587]: Resized filesystem in /dev/sda9
Oct 9 01:04:58.485587 extend-filesystems[1587]: Found sr0
Oct 9 01:04:58.467868 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 9 01:04:58.468405 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 9 01:04:58.502182 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 9 01:04:58.510590 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 9 01:04:58.586505 coreos-metadata[1691]: Oct 09 01:04:58.586 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Oct 9 01:04:58.588986 locksmithd[1674]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 9 01:04:58.589635 coreos-metadata[1691]: Oct 09 01:04:58.588 INFO Fetch successful
Oct 9 01:04:58.592713 unknown[1691]: wrote ssh authorized keys file for user: core
Oct 9 01:04:58.627902 update-ssh-keys[1698]: Updated "/home/core/.ssh/authorized_keys"
Oct 9 01:04:58.628662 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 9 01:04:58.643989 containerd[1635]: time="2024-10-09T01:04:58.642808143Z" level=info msg="starting containerd" revision=b2ce781edcbd6cb758f172ecab61c79d607cc41d version=v1.7.22
Oct 9 01:04:58.646711 systemd[1]: Finished sshkeys.service.
Oct 9 01:04:58.666909 sshd_keygen[1621]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 9 01:04:58.687510 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 9 01:04:58.704472 containerd[1635]: time="2024-10-09T01:04:58.703707102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:04:58.704615 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 9 01:04:58.707017 containerd[1635]: time="2024-10-09T01:04:58.706896902Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:04:58.707017 containerd[1635]: time="2024-10-09T01:04:58.706923566Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 9 01:04:58.707017 containerd[1635]: time="2024-10-09T01:04:58.706938583Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707133395Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707154442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707225728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707238506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707448272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707462220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707474065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707482253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707576140Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707790777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708183 containerd[1635]: time="2024-10-09T01:04:58.707934583Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 9 01:04:58.708428 containerd[1635]: time="2024-10-09T01:04:58.707946438Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 9 01:04:58.708428 containerd[1635]: time="2024-10-09T01:04:58.708060731Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 9 01:04:58.708428 containerd[1635]: time="2024-10-09T01:04:58.708120038Z" level=info msg="metadata content store policy set" policy=shared
Oct 9 01:04:58.715328 systemd[1]: issuegen.service: Deactivated successfully.
Oct 9 01:04:58.715610 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 9 01:04:58.721173 containerd[1635]: time="2024-10-09T01:04:58.721142269Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 9 01:04:58.721252 containerd[1635]: time="2024-10-09T01:04:58.721194239Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 9 01:04:58.721252 containerd[1635]: time="2024-10-09T01:04:58.721210229Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 9 01:04:58.721252 containerd[1635]: time="2024-10-09T01:04:58.721224986Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 9 01:04:58.721252 containerd[1635]: time="2024-10-09T01:04:58.721239484Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 9 01:04:58.723557 containerd[1635]: time="2024-10-09T01:04:58.723486569Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 9 01:04:58.723917 containerd[1635]: time="2024-10-09T01:04:58.723829789Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724279104Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724322307Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724336463Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724348806Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724359779Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724371137Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724382443Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724411822Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724424724Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724439149Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724464 containerd[1635]: time="2024-10-09T01:04:58.724449263Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 9 01:04:58.724349 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 9 01:04:58.724690 containerd[1635]: time="2024-10-09T01:04:58.724466538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.724690 containerd[1635]: time="2024-10-09T01:04:58.724496083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.724690 containerd[1635]: time="2024-10-09T01:04:58.724506871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.724690 containerd[1635]: time="2024-10-09T01:04:58.724518011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.724690 containerd[1635]: time="2024-10-09T01:04:58.724528789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.724690 containerd[1635]: time="2024-10-09T01:04:58.724539286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.724690 containerd[1635]: time="2024-10-09T01:04:58.724548944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725239726Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725259944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725273385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725284897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725295489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725504986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725519194Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725537142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725550034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725577381Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725943078Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.725966363Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 9 01:04:58.727251 containerd[1635]: time="2024-10-09T01:04:58.726112148Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 9 01:04:58.727484 containerd[1635]: time="2024-10-09T01:04:58.726125071Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 9 01:04:58.727484 containerd[1635]: time="2024-10-09T01:04:58.726134533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727484 containerd[1635]: time="2024-10-09T01:04:58.726146606Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 9 01:04:58.727484 containerd[1635]: time="2024-10-09T01:04:58.726155519Z" level=info msg="NRI interface is disabled by configuration."
Oct 9 01:04:58.727484 containerd[1635]: time="2024-10-09T01:04:58.726164336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 9 01:04:58.727570 containerd[1635]: time="2024-10-09T01:04:58.727018480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 9 01:04:58.727570 containerd[1635]: time="2024-10-09T01:04:58.727062107Z" level=info msg="Connect containerd service"
Oct 9 01:04:58.727570 containerd[1635]: time="2024-10-09T01:04:58.727084480Z" level=info msg="using legacy CRI server"
Oct 9 01:04:58.727570 containerd[1635]: time="2024-10-09T01:04:58.727090626Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 9 01:04:58.727570 containerd[1635]: time="2024-10-09T01:04:58.727167840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 9 01:04:58.728763 containerd[1635]: time="2024-10-09T01:04:58.728727687Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 9 01:04:58.729319 containerd[1635]: time="2024-10-09T01:04:58.728828696Z" level=info msg="Start subscribing containerd event"
Oct 9 01:04:58.729319 containerd[1635]: time="2024-10-09T01:04:58.728865151Z" level=info msg="Start recovering state"
Oct 9 01:04:58.729319 containerd[1635]: time="2024-10-09T01:04:58.728930811Z" level=info msg="Start event monitor"
Oct 9 01:04:58.729319 containerd[1635]: time="2024-10-09T01:04:58.728947153Z" level=info msg="Start snapshots syncer"
Oct 9 01:04:58.729319 containerd[1635]: time="2024-10-09T01:04:58.728955299Z" level=info msg="Start cni network conf syncer for default"
Oct 9 01:04:58.729319 containerd[1635]: time="2024-10-09T01:04:58.728962004Z" level=info msg="Start streaming server"
Oct 9 01:04:58.729772 containerd[1635]: time="2024-10-09T01:04:58.729649417Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 9 01:04:58.729772 containerd[1635]: time="2024-10-09T01:04:58.729701066Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 9 01:04:58.729861 containerd[1635]: time="2024-10-09T01:04:58.729842292Z" level=info msg="containerd successfully booted in 0.088527s"
Oct 9 01:04:58.732741 systemd[1]: Started containerd.service - containerd container runtime.
Oct 9 01:04:58.750549 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 9 01:04:58.758428 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 9 01:04:58.763076 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 9 01:04:58.768443 systemd[1]: Reached target getty.target - Login Prompts.
Oct 9 01:04:58.993288 tar[1630]: linux-amd64/LICENSE
Oct 9 01:04:58.993288 tar[1630]: linux-amd64/README.md
Oct 9 01:04:59.006664 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 9 01:04:59.379255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:04:59.379463 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:04:59.382602 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 9 01:04:59.388851 systemd[1]: Startup finished in 6.441s (kernel) + 4.540s (userspace) = 10.982s.
Oct 9 01:05:00.017255 kubelet[1740]: E1009 01:05:00.017161 1740 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:05:00.020982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:05:00.021391 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:05:10.272245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 9 01:05:10.279602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:05:10.419130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:05:10.423309 (kubelet)[1765]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:05:10.469409 kubelet[1765]: E1009 01:05:10.469341 1765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:05:10.474955 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:05:10.475236 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:05:20.726147 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 9 01:05:20.733266 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:05:20.905739 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:05:20.908695 (kubelet)[1787]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:05:20.950242 kubelet[1787]: E1009 01:05:20.950180 1787 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:05:20.954031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:05:20.954272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:05:31.205158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 9 01:05:31.212258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:05:31.361150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:05:31.364957 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:05:31.407502 kubelet[1809]: E1009 01:05:31.407444 1809 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:05:31.411866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:05:31.412567 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:05:41.662554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 9 01:05:41.669140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:05:41.792542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:05:41.808317 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:05:41.848075 kubelet[1831]: E1009 01:05:41.848024 1831 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:05:41.852551 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:05:41.852788 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:05:43.446687 update_engine[1611]: I20241009 01:05:43.446604 1611 update_attempter.cc:509] Updating boot flags...
Oct 9 01:05:43.500009 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1850)
Oct 9 01:05:43.547019 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1849)
Oct 9 01:05:51.983905 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 9 01:05:51.989350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:05:52.127329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:05:52.129604 (kubelet)[1871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:05:52.169268 kubelet[1871]: E1009 01:05:52.169177 1871 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:05:52.172541 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:05:52.172787 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:05:53.925413 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 9 01:05:53.930208 systemd[1]: Started sshd@0-49.13.59.7:22-139.178.68.195:33002.service - OpenSSH per-connection server daemon (139.178.68.195:33002).
Oct 9 01:05:54.939019 sshd[1880]: Accepted publickey for core from 139.178.68.195 port 33002 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:05:54.941890 sshd[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:05:54.950169 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 9 01:05:54.961179 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 9 01:05:54.964114 systemd-logind[1608]: New session 1 of user core.
Oct 9 01:05:54.974882 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 9 01:05:54.983191 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 9 01:05:54.986609 (systemd)[1886]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 9 01:05:55.082779 systemd[1886]: Queued start job for default target default.target.
Oct 9 01:05:55.083182 systemd[1886]: Created slice app.slice - User Application Slice.
Oct 9 01:05:55.083204 systemd[1886]: Reached target paths.target - Paths.
Oct 9 01:05:55.083216 systemd[1886]: Reached target timers.target - Timers.
Oct 9 01:05:55.090048 systemd[1886]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 9 01:05:55.097336 systemd[1886]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 9 01:05:55.097387 systemd[1886]: Reached target sockets.target - Sockets.
Oct 9 01:05:55.097400 systemd[1886]: Reached target basic.target - Basic System.
Oct 9 01:05:55.097437 systemd[1886]: Reached target default.target - Main User Target.
Oct 9 01:05:55.097466 systemd[1886]: Startup finished in 104ms.
Oct 9 01:05:55.097809 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 9 01:05:55.099913 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 9 01:05:55.795212 systemd[1]: Started sshd@1-49.13.59.7:22-139.178.68.195:33016.service - OpenSSH per-connection server daemon (139.178.68.195:33016).
Oct 9 01:05:56.785164 sshd[1898]: Accepted publickey for core from 139.178.68.195 port 33016 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:05:56.787002 sshd[1898]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:05:56.792073 systemd-logind[1608]: New session 2 of user core.
Oct 9 01:05:56.800264 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 9 01:05:57.477373 sshd[1898]: pam_unix(sshd:session): session closed for user core
Oct 9 01:05:57.485948 systemd[1]: sshd@1-49.13.59.7:22-139.178.68.195:33016.service: Deactivated successfully.
Oct 9 01:05:57.491448 systemd[1]: session-2.scope: Deactivated successfully.
Oct 9 01:05:57.493086 systemd-logind[1608]: Session 2 logged out. Waiting for processes to exit.
Oct 9 01:05:57.494868 systemd-logind[1608]: Removed session 2.
Oct 9 01:05:57.648632 systemd[1]: Started sshd@2-49.13.59.7:22-139.178.68.195:33026.service - OpenSSH per-connection server daemon (139.178.68.195:33026).
Oct 9 01:05:58.645560 sshd[1906]: Accepted publickey for core from 139.178.68.195 port 33026 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:05:58.647751 sshd[1906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:05:58.654830 systemd-logind[1608]: New session 3 of user core.
Oct 9 01:05:58.661284 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 9 01:05:59.329017 sshd[1906]: pam_unix(sshd:session): session closed for user core
Oct 9 01:05:59.331623 systemd[1]: sshd@2-49.13.59.7:22-139.178.68.195:33026.service: Deactivated successfully.
Oct 9 01:05:59.335890 systemd[1]: session-3.scope: Deactivated successfully.
Oct 9 01:05:59.336703 systemd-logind[1608]: Session 3 logged out. Waiting for processes to exit.
Oct 9 01:05:59.337887 systemd-logind[1608]: Removed session 3.
Oct 9 01:05:59.502202 systemd[1]: Started sshd@3-49.13.59.7:22-139.178.68.195:33036.service - OpenSSH per-connection server daemon (139.178.68.195:33036).
Oct 9 01:06:00.490478 sshd[1914]: Accepted publickey for core from 139.178.68.195 port 33036 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:06:00.492919 sshd[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:00.500655 systemd-logind[1608]: New session 4 of user core.
Oct 9 01:06:00.507470 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 9 01:06:01.185878 sshd[1914]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:01.190785 systemd[1]: sshd@3-49.13.59.7:22-139.178.68.195:33036.service: Deactivated successfully.
Oct 9 01:06:01.195208 systemd[1]: session-4.scope: Deactivated successfully.
Oct 9 01:06:01.195863 systemd-logind[1608]: Session 4 logged out. Waiting for processes to exit.
Oct 9 01:06:01.196995 systemd-logind[1608]: Removed session 4.
Oct 9 01:06:01.354661 systemd[1]: Started sshd@4-49.13.59.7:22-139.178.68.195:36086.service - OpenSSH per-connection server daemon (139.178.68.195:36086).
Oct 9 01:06:02.189019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Oct 9 01:06:02.195144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:02.323894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:02.327427 (kubelet)[1936]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:02.359935 sshd[1922]: Accepted publickey for core from 139.178.68.195 port 36086 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:06:02.361879 sshd[1922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:02.369045 systemd-logind[1608]: New session 5 of user core.
Oct 9 01:06:02.370111 kubelet[1936]: E1009 01:06:02.369895 1936 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:02.378260 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 9 01:06:02.378458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:02.378635 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:02.897833 sudo[1947]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 9 01:06:02.898286 sudo[1947]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:02.915148 sudo[1947]: pam_unix(sudo:session): session closed for user root
Oct 9 01:06:03.076809 sshd[1922]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:03.081296 systemd[1]: sshd@4-49.13.59.7:22-139.178.68.195:36086.service: Deactivated successfully.
Oct 9 01:06:03.081465 systemd-logind[1608]: Session 5 logged out. Waiting for processes to exit.
Oct 9 01:06:03.084168 systemd[1]: session-5.scope: Deactivated successfully.
Oct 9 01:06:03.085040 systemd-logind[1608]: Removed session 5.
Oct 9 01:06:03.245542 systemd[1]: Started sshd@5-49.13.59.7:22-139.178.68.195:36102.service - OpenSSH per-connection server daemon (139.178.68.195:36102).
Oct 9 01:06:04.229680 sshd[1952]: Accepted publickey for core from 139.178.68.195 port 36102 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:06:04.231918 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:04.241201 systemd-logind[1608]: New session 6 of user core.
Oct 9 01:06:04.248472 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 9 01:06:04.758155 sudo[1957]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 9 01:06:04.758553 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:04.762907 sudo[1957]: pam_unix(sudo:session): session closed for user root
Oct 9 01:06:04.769067 sudo[1956]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Oct 9 01:06:04.769417 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:04.784236 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 9 01:06:04.817398 augenrules[1979]: No rules
Oct 9 01:06:04.819059 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 9 01:06:04.819354 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 9 01:06:04.821116 sudo[1956]: pam_unix(sudo:session): session closed for user root
Oct 9 01:06:04.982771 sshd[1952]: pam_unix(sshd:session): session closed for user core
Oct 9 01:06:04.986440 systemd[1]: sshd@5-49.13.59.7:22-139.178.68.195:36102.service: Deactivated successfully.
Oct 9 01:06:04.991872 systemd[1]: session-6.scope: Deactivated successfully.
Oct 9 01:06:04.992933 systemd-logind[1608]: Session 6 logged out. Waiting for processes to exit.
Oct 9 01:06:04.994739 systemd-logind[1608]: Removed session 6.
Oct 9 01:06:05.150345 systemd[1]: Started sshd@6-49.13.59.7:22-139.178.68.195:36104.service - OpenSSH per-connection server daemon (139.178.68.195:36104).
Oct 9 01:06:06.159958 sshd[1988]: Accepted publickey for core from 139.178.68.195 port 36104 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:06:06.162825 sshd[1988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:06:06.171594 systemd-logind[1608]: New session 7 of user core.
Oct 9 01:06:06.185603 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 9 01:06:06.689018 sudo[1992]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 9 01:06:06.689447 sudo[1992]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 9 01:06:06.975438 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 9 01:06:06.978565 (dockerd)[2011]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 9 01:06:07.195657 dockerd[2011]: time="2024-10-09T01:06:07.195408045Z" level=info msg="Starting up"
Oct 9 01:06:07.283483 dockerd[2011]: time="2024-10-09T01:06:07.283247031Z" level=info msg="Loading containers: start."
Oct 9 01:06:07.451008 kernel: Initializing XFRM netlink socket
Oct 9 01:06:07.536159 systemd-networkd[1262]: docker0: Link UP
Oct 9 01:06:07.566527 dockerd[2011]: time="2024-10-09T01:06:07.566466673Z" level=info msg="Loading containers: done."
Oct 9 01:06:07.581130 dockerd[2011]: time="2024-10-09T01:06:07.581096394Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 9 01:06:07.581253 dockerd[2011]: time="2024-10-09T01:06:07.581164229Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Oct 9 01:06:07.581285 dockerd[2011]: time="2024-10-09T01:06:07.581262565Z" level=info msg="Daemon has completed initialization"
Oct 9 01:06:07.608791 dockerd[2011]: time="2024-10-09T01:06:07.608755165Z" level=info msg="API listen on /run/docker.sock"
Oct 9 01:06:07.609054 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 9 01:06:08.570929 containerd[1635]: time="2024-10-09T01:06:08.570885066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 9 01:06:09.202406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount931839034.mount: Deactivated successfully.
Oct 9 01:06:11.091337 containerd[1635]: time="2024-10-09T01:06:11.091285946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:11.092328 containerd[1635]: time="2024-10-09T01:06:11.092282952Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=35213933"
Oct 9 01:06:11.093060 containerd[1635]: time="2024-10-09T01:06:11.092969081Z" level=info msg="ImageCreate event name:\"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:11.095047 containerd[1635]: time="2024-10-09T01:06:11.095010597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:11.096276 containerd[1635]: time="2024-10-09T01:06:11.095862414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"35210641\" in 2.524945024s"
Oct 9 01:06:11.096276 containerd[1635]: time="2024-10-09T01:06:11.095891311Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:bc1ec5c2b6c60a3b18e7f54a99f0452c038400ecaaa2576931fd5342a0586abb\""
Oct 9 01:06:11.114987 containerd[1635]: time="2024-10-09T01:06:11.114951710Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 9 01:06:12.483850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Oct 9 01:06:12.492570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:12.631193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:12.635854 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 9 01:06:12.679754 kubelet[2281]: E1009 01:06:12.679132 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 9 01:06:12.684724 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 9 01:06:12.684970 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 9 01:06:13.089231 containerd[1635]: time="2024-10-09T01:06:13.089181330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:13.090427 containerd[1635]: time="2024-10-09T01:06:13.090385279Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=32208693"
Oct 9 01:06:13.091726 containerd[1635]: time="2024-10-09T01:06:13.091688666Z" level=info msg="ImageCreate event name:\"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:13.094587 containerd[1635]: time="2024-10-09T01:06:13.094547136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:13.095787 containerd[1635]: time="2024-10-09T01:06:13.095638784Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"33739229\" in 1.980641284s"
Oct 9 01:06:13.095787 containerd[1635]: time="2024-10-09T01:06:13.095687240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:5abda0d0a9153cd1f90fd828be379f7a16a6c814e6efbbbf31e247e13c3843e5\""
Oct 9 01:06:13.120079 containerd[1635]: time="2024-10-09T01:06:13.119493767Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 9 01:06:14.294734 containerd[1635]: time="2024-10-09T01:06:14.294683506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:14.295680 containerd[1635]: time="2024-10-09T01:06:14.295562771Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=17320476"
Oct 9 01:06:14.296471 containerd[1635]: time="2024-10-09T01:06:14.296423230Z" level=info msg="ImageCreate event name:\"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:14.302062 containerd[1635]: time="2024-10-09T01:06:14.301195933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:14.302552 containerd[1635]: time="2024-10-09T01:06:14.302530730Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"18851030\" in 1.183004236s"
Oct 9 01:06:14.302629 containerd[1635]: time="2024-10-09T01:06:14.302615466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:059957505b3370d4c57d793e79cc70f9063d7ab75767f7040f5cc85572fe7e8d\""
Oct 9 01:06:14.322843 containerd[1635]: time="2024-10-09T01:06:14.322805722Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 9 01:06:15.481500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104713129.mount: Deactivated successfully.
Oct 9 01:06:15.763431 containerd[1635]: time="2024-10-09T01:06:15.763360829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:15.764414 containerd[1635]: time="2024-10-09T01:06:15.764372674Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=28601776"
Oct 9 01:06:15.765289 containerd[1635]: time="2024-10-09T01:06:15.765243722Z" level=info msg="ImageCreate event name:\"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:15.766989 containerd[1635]: time="2024-10-09T01:06:15.766901703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:15.767727 containerd[1635]: time="2024-10-09T01:06:15.767339877Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"28600769\" in 1.444503124s"
Oct 9 01:06:15.767727 containerd[1635]: time="2024-10-09T01:06:15.767366189Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:dd650d127e51776919ec1622a4469a8b141b2dfee5a33fbc5cb9729372e0dcfa\""
Oct 9 01:06:15.789291 containerd[1635]: time="2024-10-09T01:06:15.789255905Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 9 01:06:16.324572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134558153.mount: Deactivated successfully.
Oct 9 01:06:16.954961 containerd[1635]: time="2024-10-09T01:06:16.954856743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:16.956090 containerd[1635]: time="2024-10-09T01:06:16.956048532Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185841"
Oct 9 01:06:16.957011 containerd[1635]: time="2024-10-09T01:06:16.956962062Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:16.959231 containerd[1635]: time="2024-10-09T01:06:16.959183248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:16.960062 containerd[1635]: time="2024-10-09T01:06:16.959934108Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.170643064s"
Oct 9 01:06:16.960062 containerd[1635]: time="2024-10-09T01:06:16.959960360Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Oct 9 01:06:16.980996 containerd[1635]: time="2024-10-09T01:06:16.980943286Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 9 01:06:17.461176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1414559174.mount: Deactivated successfully.
Oct 9 01:06:17.466245 containerd[1635]: time="2024-10-09T01:06:17.466197881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:17.467078 containerd[1635]: time="2024-10-09T01:06:17.467034608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322310"
Oct 9 01:06:17.467819 containerd[1635]: time="2024-10-09T01:06:17.467753043Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:17.469847 containerd[1635]: time="2024-10-09T01:06:17.469808950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:17.471010 containerd[1635]: time="2024-10-09T01:06:17.470593656Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 489.622665ms"
Oct 9 01:06:17.471010 containerd[1635]: time="2024-10-09T01:06:17.470622021Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Oct 9 01:06:17.497227 containerd[1635]: time="2024-10-09T01:06:17.497195526Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 9 01:06:18.034857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627161207.mount: Deactivated successfully.
Oct 9 01:06:18.793193 systemd[1]: Started sshd@7-49.13.59.7:22-194.169.175.38:23272.service - OpenSSH per-connection server daemon (194.169.175.38:23272).
Oct 9 01:06:19.416386 sshd[2419]: Invalid user admin from 194.169.175.38 port 23272
Oct 9 01:06:19.473840 sshd[2419]: Connection closed by invalid user admin 194.169.175.38 port 23272 [preauth]
Oct 9 01:06:19.478464 systemd[1]: sshd@7-49.13.59.7:22-194.169.175.38:23272.service: Deactivated successfully.
Oct 9 01:06:19.516161 containerd[1635]: time="2024-10-09T01:06:19.516097313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:19.517239 containerd[1635]: time="2024-10-09T01:06:19.517204388Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651705"
Oct 9 01:06:19.518059 containerd[1635]: time="2024-10-09T01:06:19.518032516Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:19.520398 containerd[1635]: time="2024-10-09T01:06:19.520359099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:06:19.521569 containerd[1635]: time="2024-10-09T01:06:19.521336760Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.024107818s"
Oct 9 01:06:19.521569 containerd[1635]: time="2024-10-09T01:06:19.521363112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Oct 9 01:06:22.204468 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:22.212328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:22.235158 systemd[1]: Reloading requested from client PID 2495 ('systemctl') (unit session-7.scope)...
Oct 9 01:06:22.235171 systemd[1]: Reloading...
Oct 9 01:06:22.349010 zram_generator::config[2536]: No configuration found.
Oct 9 01:06:22.445937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 9 01:06:22.510201 systemd[1]: Reloading finished in 274 ms.
Oct 9 01:06:22.560166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:22.563774 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:22.565051 systemd[1]: kubelet.service: Deactivated successfully.
Oct 9 01:06:22.565320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:22.579746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 9 01:06:22.711368 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 9 01:06:22.714623 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 9 01:06:22.753530 kubelet[2604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:06:22.753530 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 9 01:06:22.753530 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 9 01:06:22.754644 kubelet[2604]: I1009 01:06:22.754585 2604 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 9 01:06:22.944549 kubelet[2604]: I1009 01:06:22.944437 2604 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 9 01:06:22.944549 kubelet[2604]: I1009 01:06:22.944472 2604 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 9 01:06:22.944670 kubelet[2604]: I1009 01:06:22.944651 2604 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 9 01:06:22.967767 kubelet[2604]: I1009 01:06:22.967548 2604 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 9 01:06:22.969226 kubelet[2604]: E1009 01:06:22.969212 2604 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.59.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.59.7:6443: connect: connection refused
Oct 9 01:06:22.977768 kubelet[2604]: I1009 01:06:22.977261 2604 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 9 01:06:22.977768 kubelet[2604]: I1009 01:06:22.977616 2604 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 9 01:06:22.978640 kubelet[2604]: I1009 01:06:22.978617 2604 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 9 01:06:22.980065 kubelet[2604]: I1009 01:06:22.980042 2604 topology_manager.go:138] "Creating topology manager with none policy"
Oct 9 01:06:22.980065 kubelet[2604]: I1009 01:06:22.980060 2604 container_manager_linux.go:301] "Creating device plugin manager"
Oct 9 01:06:22.980184 kubelet[2604]: I1009 01:06:22.980173 2604 state_mem.go:36] "Initialized new in-memory state store"
Oct 9 01:06:22.982296 kubelet[2604]: I1009 01:06:22.982052 2604 kubelet.go:396] "Attempting to sync node with API server"
Oct 9 01:06:22.982296 kubelet[2604]: I1009 01:06:22.982081 2604 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 9 01:06:22.982296 kubelet[2604]: I1009 01:06:22.982112 2604 kubelet.go:312] "Adding apiserver pod source"
Oct 9 01:06:22.982296 kubelet[2604]: I1009 01:06:22.982125 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 9 01:06:22.984215 kubelet[2604]: W1009 01:06:22.983897 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.13.59.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-f-4ef11beaf3&limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused
Oct 9 01:06:22.984215 kubelet[2604]: E1009 01:06:22.983941 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.59.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-f-4ef11beaf3&limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused
Oct 9 01:06:22.984475 kubelet[2604]: W1009 01:06:22.984445 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.59.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused
Oct 9 01:06:22.984553 kubelet[2604]: E1009 01:06:22.984540 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.59.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused
Oct 9 01:06:22.984679 kubelet[2604]: I1009 01:06:22.984666 2604 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1"
Oct 9 01:06:22.989758 kubelet[2604]: I1009 01:06:22.989742 2604 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 9 01:06:22.990670 kubelet[2604]: W1009 01:06:22.990643 2604 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 9 01:06:22.991287 kubelet[2604]: I1009 01:06:22.991135 2604 server.go:1256] "Started kubelet"
Oct 9 01:06:22.991287 kubelet[2604]: I1009 01:06:22.991184 2604 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 9 01:06:22.992027 kubelet[2604]: I1009 01:06:22.991860 2604 server.go:461] "Adding debug handlers to kubelet server"
Oct 9 01:06:22.994147 kubelet[2604]: I1009 01:06:22.994109 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 9 01:06:22.994781 kubelet[2604]: I1009 01:06:22.994750 2604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 9 01:06:22.995017 kubelet[2604]: I1009 01:06:22.994914 2604 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 9 01:06:22.997043 kubelet[2604]: E1009 01:06:22.996856 2604 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.59.7:6443/api/v1/namespaces/default/events\": dial tcp 49.13.59.7:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4116-0-0-f-4ef11beaf3.17fca35e6f30541b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4116-0-0-f-4ef11beaf3,UID:ci-4116-0-0-f-4ef11beaf3,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4116-0-0-f-4ef11beaf3,},FirstTimestamp:2024-10-09 01:06:22.991119387 +0000 UTC m=+0.272817487,LastTimestamp:2024-10-09 01:06:22.991119387 +0000 UTC m=+0.272817487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4116-0-0-f-4ef11beaf3,}"
Oct 9 01:06:23.001001 kubelet[2604]: I1009 01:06:23.000964 2604 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 9 01:06:23.003082 kubelet[2604]: E1009 01:06:23.003062 2604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.59.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-f-4ef11beaf3?timeout=10s\": dial tcp 49.13.59.7:6443: connect: connection refused" interval="200ms"
Oct 9 01:06:23.003851 kubelet[2604]: I1009 01:06:23.003167 2604 factory.go:221] Registration of the systemd container factory successfully
Oct 9 01:06:23.003851 kubelet[2604]: I1009 01:06:23.003229 2604 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 9 01:06:23.004583 kubelet[2604]: I1009 01:06:23.004550 2604 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 9 01:06:23.004644 kubelet[2604]: I1009 01:06:23.004598 2604 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 9 01:06:23.005772 kubelet[2604]: W1009 01:06:23.005495 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.59.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused
Oct 9
01:06:23.005772 kubelet[2604]: E1009 01:06:23.005524 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.59.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:23.006029 kubelet[2604]: E1009 01:06:23.005958 2604 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 9 01:06:23.006325 kubelet[2604]: I1009 01:06:23.006308 2604 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:06:23.015887 kubelet[2604]: I1009 01:06:23.015803 2604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:06:23.017059 kubelet[2604]: I1009 01:06:23.017047 2604 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:06:23.017342 kubelet[2604]: I1009 01:06:23.017112 2604 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:06:23.017342 kubelet[2604]: I1009 01:06:23.017130 2604 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 01:06:23.017342 kubelet[2604]: E1009 01:06:23.017170 2604 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:06:23.024515 kubelet[2604]: W1009 01:06:23.024480 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.59.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:23.024572 kubelet[2604]: E1009 01:06:23.024527 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://49.13.59.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:23.034624 kubelet[2604]: I1009 01:06:23.034611 2604 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:06:23.034716 kubelet[2604]: I1009 01:06:23.034705 2604 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:06:23.034924 kubelet[2604]: I1009 01:06:23.034764 2604 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:06:23.036715 kubelet[2604]: I1009 01:06:23.036654 2604 policy_none.go:49] "None policy: Start" Oct 9 01:06:23.037093 kubelet[2604]: I1009 01:06:23.037072 2604 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:06:23.037093 kubelet[2604]: I1009 01:06:23.037094 2604 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:06:23.041226 kubelet[2604]: I1009 01:06:23.041202 2604 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:06:23.041414 kubelet[2604]: I1009 01:06:23.041394 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:06:23.045067 kubelet[2604]: E1009 01:06:23.045045 2604 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:23.102724 kubelet[2604]: I1009 01:06:23.102692 2604 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.103036 kubelet[2604]: E1009 01:06:23.103022 2604 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.59.7:6443/api/v1/nodes\": dial tcp 49.13.59.7:6443: connect: connection refused" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.117641 kubelet[2604]: I1009 01:06:23.117606 2604 topology_manager.go:215] "Topology Admit Handler" podUID="59524686ef5fa36d3383f4426b5ac091" podNamespace="kube-system" 
podName="kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.119445 kubelet[2604]: I1009 01:06:23.119411 2604 topology_manager.go:215] "Topology Admit Handler" podUID="0f32c5f55ea3c09dab3ec599863f6c53" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.121126 kubelet[2604]: I1009 01:06:23.121090 2604 topology_manager.go:215] "Topology Admit Handler" podUID="6a00ff90a4036e8be39af5df78be79e2" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.205011 kubelet[2604]: E1009 01:06:23.204779 2604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.59.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-f-4ef11beaf3?timeout=10s\": dial tcp 49.13.59.7:6443: connect: connection refused" interval="400ms" Oct 9 01:06:23.207032 kubelet[2604]: I1009 01:06:23.206633 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59524686ef5fa36d3383f4426b5ac091-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" (UID: \"59524686ef5fa36d3383f4426b5ac091\") " pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207032 kubelet[2604]: I1009 01:06:23.206689 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-k8s-certs\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207032 kubelet[2604]: I1009 01:06:23.206761 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a00ff90a4036e8be39af5df78be79e2-kubeconfig\") pod 
\"kube-scheduler-ci-4116-0-0-f-4ef11beaf3\" (UID: \"6a00ff90a4036e8be39af5df78be79e2\") " pod="kube-system/kube-scheduler-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207032 kubelet[2604]: I1009 01:06:23.206793 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59524686ef5fa36d3383f4426b5ac091-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" (UID: \"59524686ef5fa36d3383f4426b5ac091\") " pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207032 kubelet[2604]: I1009 01:06:23.206827 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59524686ef5fa36d3383f4426b5ac091-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" (UID: \"59524686ef5fa36d3383f4426b5ac091\") " pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207364 kubelet[2604]: I1009 01:06:23.206858 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207364 kubelet[2604]: I1009 01:06:23.206889 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207364 kubelet[2604]: I1009 01:06:23.206923 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-kubeconfig\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.207364 kubelet[2604]: I1009 01:06:23.206959 2604 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.306494 kubelet[2604]: I1009 01:06:23.306416 2604 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.307431 kubelet[2604]: E1009 01:06:23.307002 2604 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.59.7:6443/api/v1/nodes\": dial tcp 49.13.59.7:6443: connect: connection refused" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.427987 containerd[1635]: time="2024-10-09T01:06:23.427893767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-f-4ef11beaf3,Uid:59524686ef5fa36d3383f4426b5ac091,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:23.433399 containerd[1635]: time="2024-10-09T01:06:23.433283942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-f-4ef11beaf3,Uid:0f32c5f55ea3c09dab3ec599863f6c53,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:23.435172 containerd[1635]: time="2024-10-09T01:06:23.434812316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-f-4ef11beaf3,Uid:6a00ff90a4036e8be39af5df78be79e2,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:23.605920 kubelet[2604]: E1009 01:06:23.605871 2604 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://49.13.59.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-f-4ef11beaf3?timeout=10s\": dial tcp 49.13.59.7:6443: connect: connection refused" interval="800ms" Oct 9 01:06:23.709348 kubelet[2604]: I1009 01:06:23.709313 2604 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.709646 kubelet[2604]: E1009 01:06:23.709611 2604 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.59.7:6443/api/v1/nodes\": dial tcp 49.13.59.7:6443: connect: connection refused" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:23.821405 kubelet[2604]: W1009 01:06:23.821294 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.59.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:23.821405 kubelet[2604]: E1009 01:06:23.821386 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.59.7:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:24.001431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540208160.mount: Deactivated successfully. 
Oct 9 01:06:24.008318 containerd[1635]: time="2024-10-09T01:06:24.008221858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:24.009074 containerd[1635]: time="2024-10-09T01:06:24.009043397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:24.009912 containerd[1635]: time="2024-10-09T01:06:24.009869865Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312076" Oct 9 01:06:24.010471 containerd[1635]: time="2024-10-09T01:06:24.010433890Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:06:24.011179 containerd[1635]: time="2024-10-09T01:06:24.011147147Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:24.012413 containerd[1635]: time="2024-10-09T01:06:24.012364512Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 9 01:06:24.013009 containerd[1635]: time="2024-10-09T01:06:24.012940640Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:24.017073 containerd[1635]: time="2024-10-09T01:06:24.016988069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 9 01:06:24.017953 
containerd[1635]: time="2024-10-09T01:06:24.017798827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.72649ms" Oct 9 01:06:24.019631 containerd[1635]: time="2024-10-09T01:06:24.019576379Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 584.667482ms" Oct 9 01:06:24.028597 containerd[1635]: time="2024-10-09T01:06:24.028558773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.150979ms" Oct 9 01:06:24.054867 kubelet[2604]: W1009 01:06:24.054825 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.59.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:24.057004 kubelet[2604]: E1009 01:06:24.055028 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.59.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:24.089572 kubelet[2604]: W1009 01:06:24.089468 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: 
failed to list *v1.Node: Get "https://49.13.59.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-f-4ef11beaf3&limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:24.089572 kubelet[2604]: E1009 01:06:24.089516 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.59.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4116-0-0-f-4ef11beaf3&limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:24.117367 containerd[1635]: time="2024-10-09T01:06:24.116592855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:24.117367 containerd[1635]: time="2024-10-09T01:06:24.116632593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:24.117367 containerd[1635]: time="2024-10-09T01:06:24.116645087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:24.117367 containerd[1635]: time="2024-10-09T01:06:24.116712259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:24.125821 containerd[1635]: time="2024-10-09T01:06:24.125548106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:24.126012 containerd[1635]: time="2024-10-09T01:06:24.125858775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:24.126012 containerd[1635]: time="2024-10-09T01:06:24.125918662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:24.126653 containerd[1635]: time="2024-10-09T01:06:24.126569888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:24.128620 containerd[1635]: time="2024-10-09T01:06:24.128430864Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:24.128620 containerd[1635]: time="2024-10-09T01:06:24.128468287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:24.128620 containerd[1635]: time="2024-10-09T01:06:24.128481492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:24.128620 containerd[1635]: time="2024-10-09T01:06:24.128547672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:24.216178 containerd[1635]: time="2024-10-09T01:06:24.215354500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4116-0-0-f-4ef11beaf3,Uid:59524686ef5fa36d3383f4426b5ac091,Namespace:kube-system,Attempt:0,} returns sandbox id \"aef498e673fc8e5f704b509766f4d86985f32c2d17ad92b885553e5f410ba1ba\"" Oct 9 01:06:24.218463 containerd[1635]: time="2024-10-09T01:06:24.218439563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4116-0-0-f-4ef11beaf3,Uid:6a00ff90a4036e8be39af5df78be79e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c239477af478b93b972662b61265866fc439efa48834f2f6071ffb025ea9dcdf\"" Oct 9 01:06:24.221723 containerd[1635]: time="2024-10-09T01:06:24.221694519Z" level=info msg="CreateContainer within sandbox \"aef498e673fc8e5f704b509766f4d86985f32c2d17ad92b885553e5f410ba1ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 9 01:06:24.224048 containerd[1635]: time="2024-10-09T01:06:24.224028170Z" level=info msg="CreateContainer within sandbox \"c239477af478b93b972662b61265866fc439efa48834f2f6071ffb025ea9dcdf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 9 01:06:24.228882 containerd[1635]: time="2024-10-09T01:06:24.228857801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4116-0-0-f-4ef11beaf3,Uid:0f32c5f55ea3c09dab3ec599863f6c53,Namespace:kube-system,Attempt:0,} returns sandbox id \"6793109fd3c31f33567480e41df2ca3755f033e9af0c19a5785d6277e8a4f558\"" Oct 9 01:06:24.231239 containerd[1635]: time="2024-10-09T01:06:24.231212873Z" level=info msg="CreateContainer within sandbox \"6793109fd3c31f33567480e41df2ca3755f033e9af0c19a5785d6277e8a4f558\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 9 01:06:24.242717 containerd[1635]: time="2024-10-09T01:06:24.242687270Z" level=info msg="CreateContainer within sandbox 
\"c239477af478b93b972662b61265866fc439efa48834f2f6071ffb025ea9dcdf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2ba1305da4419d8683698d133f684b9f6da0e0ec8ee27885327fdf03a8b4b4bc\"" Oct 9 01:06:24.243127 containerd[1635]: time="2024-10-09T01:06:24.243085480Z" level=info msg="StartContainer for \"2ba1305da4419d8683698d133f684b9f6da0e0ec8ee27885327fdf03a8b4b4bc\"" Oct 9 01:06:24.245605 containerd[1635]: time="2024-10-09T01:06:24.245552312Z" level=info msg="CreateContainer within sandbox \"6793109fd3c31f33567480e41df2ca3755f033e9af0c19a5785d6277e8a4f558\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"20adc15293252059447f2f9e52410be97e9ba509b1dd922bfe6328fce8b5e661\"" Oct 9 01:06:24.246132 containerd[1635]: time="2024-10-09T01:06:24.246069114Z" level=info msg="StartContainer for \"20adc15293252059447f2f9e52410be97e9ba509b1dd922bfe6328fce8b5e661\"" Oct 9 01:06:24.246464 containerd[1635]: time="2024-10-09T01:06:24.246445611Z" level=info msg="CreateContainer within sandbox \"aef498e673fc8e5f704b509766f4d86985f32c2d17ad92b885553e5f410ba1ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5322bcab3edda41c2b7cc292587628db8e7b2a2e6e2e045d776c8e3e60af7ef\"" Oct 9 01:06:24.247617 containerd[1635]: time="2024-10-09T01:06:24.247505838Z" level=info msg="StartContainer for \"d5322bcab3edda41c2b7cc292587628db8e7b2a2e6e2e045d776c8e3e60af7ef\"" Oct 9 01:06:24.260752 kubelet[2604]: W1009 01:06:24.260666 2604 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.59.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:24.261154 kubelet[2604]: E1009 01:06:24.260842 2604 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://49.13.59.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.59.7:6443: connect: connection refused Oct 9 01:06:24.333605 containerd[1635]: time="2024-10-09T01:06:24.333485215Z" level=info msg="StartContainer for \"20adc15293252059447f2f9e52410be97e9ba509b1dd922bfe6328fce8b5e661\" returns successfully" Oct 9 01:06:24.348836 containerd[1635]: time="2024-10-09T01:06:24.347886715Z" level=info msg="StartContainer for \"d5322bcab3edda41c2b7cc292587628db8e7b2a2e6e2e045d776c8e3e60af7ef\" returns successfully" Oct 9 01:06:24.368022 containerd[1635]: time="2024-10-09T01:06:24.366789111Z" level=info msg="StartContainer for \"2ba1305da4419d8683698d133f684b9f6da0e0ec8ee27885327fdf03a8b4b4bc\" returns successfully" Oct 9 01:06:24.407071 kubelet[2604]: E1009 01:06:24.406780 2604 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.59.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4116-0-0-f-4ef11beaf3?timeout=10s\": dial tcp 49.13.59.7:6443: connect: connection refused" interval="1.6s" Oct 9 01:06:24.513422 kubelet[2604]: I1009 01:06:24.513332 2604 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:24.514078 kubelet[2604]: E1009 01:06:24.513726 2604 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.59.7:6443/api/v1/nodes\": dial tcp 49.13.59.7:6443: connect: connection refused" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:25.995046 kubelet[2604]: E1009 01:06:25.994933 2604 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4116-0-0-f-4ef11beaf3" not found Oct 9 01:06:26.010738 kubelet[2604]: E1009 01:06:26.010670 2604 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4116-0-0-f-4ef11beaf3\" not found" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 
01:06:26.115748 kubelet[2604]: I1009 01:06:26.115712 2604 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:26.124585 kubelet[2604]: I1009 01:06:26.124528 2604 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:26.132310 kubelet[2604]: E1009 01:06:26.132284 2604 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:26.232682 kubelet[2604]: E1009 01:06:26.232642 2604 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:26.333475 kubelet[2604]: E1009 01:06:26.333336 2604 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:26.434511 kubelet[2604]: E1009 01:06:26.434454 2604 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:26.535137 kubelet[2604]: E1009 01:06:26.535086 2604 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:26.635939 kubelet[2604]: E1009 01:06:26.635798 2604 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:26.736662 kubelet[2604]: E1009 01:06:26.736593 2604 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4116-0-0-f-4ef11beaf3\" not found" Oct 9 01:06:26.986539 kubelet[2604]: I1009 01:06:26.986499 2604 apiserver.go:52] "Watching apiserver" Oct 9 01:06:27.005595 kubelet[2604]: I1009 01:06:27.005556 2604 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 01:06:28.323652 systemd[1]: Reloading requested from client PID 2874 ('systemctl') (unit session-7.scope)... 
Oct 9 01:06:28.323675 systemd[1]: Reloading... Oct 9 01:06:28.402009 zram_generator::config[2910]: No configuration found. Oct 9 01:06:28.511229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 9 01:06:28.583363 systemd[1]: Reloading finished in 259 ms. Oct 9 01:06:28.623776 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:06:28.624220 kubelet[2604]: I1009 01:06:28.623784 2604 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:06:28.637620 systemd[1]: kubelet.service: Deactivated successfully. Oct 9 01:06:28.637968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:28.644458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 9 01:06:28.783177 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 9 01:06:28.786445 (kubelet)[2975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 9 01:06:28.841010 kubelet[2975]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 9 01:06:28.841010 kubelet[2975]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 9 01:06:28.841010 kubelet[2975]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 9 01:06:28.841010 kubelet[2975]: I1009 01:06:28.840760 2975 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 9 01:06:28.847895 kubelet[2975]: I1009 01:06:28.846210 2975 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 9 01:06:28.847895 kubelet[2975]: I1009 01:06:28.846226 2975 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 9 01:06:28.847895 kubelet[2975]: I1009 01:06:28.846348 2975 server.go:919] "Client rotation is on, will bootstrap in background" Oct 9 01:06:28.847895 kubelet[2975]: I1009 01:06:28.847491 2975 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 9 01:06:28.851962 kubelet[2975]: I1009 01:06:28.851948 2975 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 9 01:06:28.858828 kubelet[2975]: I1009 01:06:28.858803 2975 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 9 01:06:28.859398 kubelet[2975]: I1009 01:06:28.859385 2975 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 9 01:06:28.863817 kubelet[2975]: I1009 01:06:28.863795 2975 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 9 01:06:28.863947 kubelet[2975]: I1009 01:06:28.863935 2975 topology_manager.go:138] "Creating topology manager with none policy" Oct 9 01:06:28.864050 kubelet[2975]: I1009 01:06:28.864034 2975 container_manager_linux.go:301] "Creating device plugin manager" Oct 9 01:06:28.864138 kubelet[2975]: I1009 
01:06:28.864129 2975 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:06:28.864279 kubelet[2975]: I1009 01:06:28.864268 2975 kubelet.go:396] "Attempting to sync node with API server" Oct 9 01:06:28.864337 kubelet[2975]: I1009 01:06:28.864328 2975 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 9 01:06:28.864639 kubelet[2975]: I1009 01:06:28.864609 2975 kubelet.go:312] "Adding apiserver pod source" Oct 9 01:06:28.864678 kubelet[2975]: I1009 01:06:28.864674 2975 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 9 01:06:28.869015 kubelet[2975]: I1009 01:06:28.868087 2975 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.22" apiVersion="v1" Oct 9 01:06:28.869015 kubelet[2975]: I1009 01:06:28.868322 2975 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 9 01:06:28.869015 kubelet[2975]: I1009 01:06:28.868703 2975 server.go:1256] "Started kubelet" Oct 9 01:06:28.875079 kubelet[2975]: I1009 01:06:28.874611 2975 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 9 01:06:28.885051 kubelet[2975]: I1009 01:06:28.885030 2975 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 9 01:06:28.885739 kubelet[2975]: I1009 01:06:28.885726 2975 server.go:461] "Adding debug handlers to kubelet server" Oct 9 01:06:28.886539 kubelet[2975]: I1009 01:06:28.886520 2975 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 9 01:06:28.886760 kubelet[2975]: I1009 01:06:28.886748 2975 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 9 01:06:28.890813 kubelet[2975]: I1009 01:06:28.890798 2975 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 9 01:06:28.891540 kubelet[2975]: I1009 01:06:28.891526 2975 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Oct 9 01:06:28.892090 kubelet[2975]: I1009 01:06:28.892078 2975 reconciler_new.go:29] "Reconciler: start to sync state" Oct 9 01:06:28.898357 kubelet[2975]: I1009 01:06:28.897676 2975 factory.go:221] Registration of the systemd container factory successfully Oct 9 01:06:28.898357 kubelet[2975]: I1009 01:06:28.897750 2975 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 9 01:06:28.900056 kubelet[2975]: I1009 01:06:28.900044 2975 factory.go:221] Registration of the containerd container factory successfully Oct 9 01:06:28.909562 kubelet[2975]: I1009 01:06:28.908702 2975 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 9 01:06:28.911032 kubelet[2975]: I1009 01:06:28.910961 2975 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 9 01:06:28.911137 kubelet[2975]: I1009 01:06:28.911124 2975 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 9 01:06:28.911203 kubelet[2975]: I1009 01:06:28.911194 2975 kubelet.go:2329] "Starting kubelet main sync loop" Oct 9 01:06:28.911285 kubelet[2975]: E1009 01:06:28.911275 2975 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.976299 2975 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.976333 2975 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.976348 2975 state_mem.go:36] "Initialized new in-memory state store" Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.976459 2975 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.976476 2975 state_mem.go:96] "Updated CPUSet 
assignments" assignments={} Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.976483 2975 policy_none.go:49] "None policy: Start" Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.977200 2975 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 9 01:06:28.977255 kubelet[2975]: I1009 01:06:28.977216 2975 state_mem.go:35] "Initializing new in-memory state store" Oct 9 01:06:28.977508 kubelet[2975]: I1009 01:06:28.977357 2975 state_mem.go:75] "Updated machine memory state" Oct 9 01:06:28.978856 kubelet[2975]: I1009 01:06:28.978834 2975 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 9 01:06:28.980903 kubelet[2975]: I1009 01:06:28.980206 2975 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 9 01:06:28.997694 kubelet[2975]: I1009 01:06:28.997116 2975 kubelet_node_status.go:73] "Attempting to register node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.004278 kubelet[2975]: I1009 01:06:29.003421 2975 kubelet_node_status.go:112] "Node was previously registered" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.004392 kubelet[2975]: I1009 01:06:29.004380 2975 kubelet_node_status.go:76] "Successfully registered node" node="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.012368 kubelet[2975]: I1009 01:06:29.012338 2975 topology_manager.go:215] "Topology Admit Handler" podUID="59524686ef5fa36d3383f4426b5ac091" podNamespace="kube-system" podName="kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.012446 kubelet[2975]: I1009 01:06:29.012417 2975 topology_manager.go:215] "Topology Admit Handler" podUID="0f32c5f55ea3c09dab3ec599863f6c53" podNamespace="kube-system" podName="kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.012475 kubelet[2975]: I1009 01:06:29.012450 2975 topology_manager.go:215] "Topology Admit Handler" podUID="6a00ff90a4036e8be39af5df78be79e2" podNamespace="kube-system" podName="kube-scheduler-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.023071 
kubelet[2975]: E1009 01:06:29.022226 2975 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" already exists" pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.023071 kubelet[2975]: E1009 01:06:29.022305 2975 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" already exists" pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.093638 kubelet[2975]: I1009 01:06:29.093451 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-ca-certs\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.093638 kubelet[2975]: I1009 01:06:29.093521 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-kubeconfig\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.093638 kubelet[2975]: I1009 01:06:29.093558 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.093638 kubelet[2975]: I1009 01:06:29.093577 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6a00ff90a4036e8be39af5df78be79e2-kubeconfig\") pod \"kube-scheduler-ci-4116-0-0-f-4ef11beaf3\" (UID: \"6a00ff90a4036e8be39af5df78be79e2\") " pod="kube-system/kube-scheduler-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.093638 kubelet[2975]: I1009 01:06:29.093595 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/59524686ef5fa36d3383f4426b5ac091-k8s-certs\") pod \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" (UID: \"59524686ef5fa36d3383f4426b5ac091\") " pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.093959 kubelet[2975]: I1009 01:06:29.093612 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/59524686ef5fa36d3383f4426b5ac091-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" (UID: \"59524686ef5fa36d3383f4426b5ac091\") " pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.094930 kubelet[2975]: I1009 01:06:29.094746 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-flexvolume-dir\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.094930 kubelet[2975]: I1009 01:06:29.094813 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f32c5f55ea3c09dab3ec599863f6c53-k8s-certs\") pod \"kube-controller-manager-ci-4116-0-0-f-4ef11beaf3\" (UID: \"0f32c5f55ea3c09dab3ec599863f6c53\") " pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.094930 kubelet[2975]: I1009 
01:06:29.094852 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/59524686ef5fa36d3383f4426b5ac091-ca-certs\") pod \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" (UID: \"59524686ef5fa36d3383f4426b5ac091\") " pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:29.866629 kubelet[2975]: I1009 01:06:29.866513 2975 apiserver.go:52] "Watching apiserver" Oct 9 01:06:29.892769 kubelet[2975]: I1009 01:06:29.892710 2975 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 9 01:06:29.971158 kubelet[2975]: E1009 01:06:29.968379 2975 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4116-0-0-f-4ef11beaf3\" already exists" pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" Oct 9 01:06:30.038619 kubelet[2975]: I1009 01:06:30.038588 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4116-0-0-f-4ef11beaf3" podStartSLOduration=3.038547365 podStartE2EDuration="3.038547365s" podCreationTimestamp="2024-10-09 01:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:30.006541299 +0000 UTC m=+1.214785906" watchObservedRunningTime="2024-10-09 01:06:30.038547365 +0000 UTC m=+1.246791973" Oct 9 01:06:30.062478 kubelet[2975]: I1009 01:06:30.062221 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4116-0-0-f-4ef11beaf3" podStartSLOduration=3.062174756 podStartE2EDuration="3.062174756s" podCreationTimestamp="2024-10-09 01:06:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:30.045063293 +0000 UTC m=+1.253307900" watchObservedRunningTime="2024-10-09 01:06:30.062174756 
+0000 UTC m=+1.270419364" Oct 9 01:06:30.062478 kubelet[2975]: I1009 01:06:30.062291 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4116-0-0-f-4ef11beaf3" podStartSLOduration=1.062277537 podStartE2EDuration="1.062277537s" podCreationTimestamp="2024-10-09 01:06:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:30.060266869 +0000 UTC m=+1.268511477" watchObservedRunningTime="2024-10-09 01:06:30.062277537 +0000 UTC m=+1.270522144" Oct 9 01:06:33.197410 sudo[1992]: pam_unix(sudo:session): session closed for user root Oct 9 01:06:33.359328 sshd[1988]: pam_unix(sshd:session): session closed for user core Oct 9 01:06:33.363734 systemd[1]: sshd@6-49.13.59.7:22-139.178.68.195:36104.service: Deactivated successfully. Oct 9 01:06:33.368051 systemd-logind[1608]: Session 7 logged out. Waiting for processes to exit. Oct 9 01:06:33.368133 systemd[1]: session-7.scope: Deactivated successfully. Oct 9 01:06:33.370346 systemd-logind[1608]: Removed session 7. Oct 9 01:06:43.340318 kubelet[2975]: I1009 01:06:43.340263 2975 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 9 01:06:43.341294 kubelet[2975]: I1009 01:06:43.340963 2975 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 9 01:06:43.341345 containerd[1635]: time="2024-10-09T01:06:43.340789221Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 9 01:06:43.687083 kubelet[2975]: I1009 01:06:43.686799 2975 topology_manager.go:215] "Topology Admit Handler" podUID="e2f8b8fa-8c8b-4cbd-8261-946f83c2add3" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-klssd" Oct 9 01:06:43.765612 kubelet[2975]: I1009 01:06:43.765581 2975 topology_manager.go:215] "Topology Admit Handler" podUID="14f3f0c8-31db-4994-8583-2faa3830b844" podNamespace="kube-system" podName="kube-proxy-hh6q5" Oct 9 01:06:43.795286 kubelet[2975]: I1009 01:06:43.795255 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4nb\" (UniqueName: \"kubernetes.io/projected/14f3f0c8-31db-4994-8583-2faa3830b844-kube-api-access-fd4nb\") pod \"kube-proxy-hh6q5\" (UID: \"14f3f0c8-31db-4994-8583-2faa3830b844\") " pod="kube-system/kube-proxy-hh6q5" Oct 9 01:06:43.795286 kubelet[2975]: I1009 01:06:43.795292 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2f8b8fa-8c8b-4cbd-8261-946f83c2add3-var-lib-calico\") pod \"tigera-operator-5d56685c77-klssd\" (UID: \"e2f8b8fa-8c8b-4cbd-8261-946f83c2add3\") " pod="tigera-operator/tigera-operator-5d56685c77-klssd" Oct 9 01:06:43.795286 kubelet[2975]: I1009 01:06:43.795310 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/14f3f0c8-31db-4994-8583-2faa3830b844-kube-proxy\") pod \"kube-proxy-hh6q5\" (UID: \"14f3f0c8-31db-4994-8583-2faa3830b844\") " pod="kube-system/kube-proxy-hh6q5" Oct 9 01:06:43.795286 kubelet[2975]: I1009 01:06:43.795328 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14f3f0c8-31db-4994-8583-2faa3830b844-xtables-lock\") pod \"kube-proxy-hh6q5\" (UID: \"14f3f0c8-31db-4994-8583-2faa3830b844\") " 
pod="kube-system/kube-proxy-hh6q5" Oct 9 01:06:43.795556 kubelet[2975]: I1009 01:06:43.795364 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv25t\" (UniqueName: \"kubernetes.io/projected/e2f8b8fa-8c8b-4cbd-8261-946f83c2add3-kube-api-access-fv25t\") pod \"tigera-operator-5d56685c77-klssd\" (UID: \"e2f8b8fa-8c8b-4cbd-8261-946f83c2add3\") " pod="tigera-operator/tigera-operator-5d56685c77-klssd" Oct 9 01:06:43.795556 kubelet[2975]: I1009 01:06:43.795382 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14f3f0c8-31db-4994-8583-2faa3830b844-lib-modules\") pod \"kube-proxy-hh6q5\" (UID: \"14f3f0c8-31db-4994-8583-2faa3830b844\") " pod="kube-system/kube-proxy-hh6q5" Oct 9 01:06:43.991785 containerd[1635]: time="2024-10-09T01:06:43.991748850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-klssd,Uid:e2f8b8fa-8c8b-4cbd-8261-946f83c2add3,Namespace:tigera-operator,Attempt:0,}" Oct 9 01:06:44.014102 containerd[1635]: time="2024-10-09T01:06:44.014006217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:44.014102 containerd[1635]: time="2024-10-09T01:06:44.014061319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:44.014102 containerd[1635]: time="2024-10-09T01:06:44.014073361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:44.014371 containerd[1635]: time="2024-10-09T01:06:44.014156344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:44.068323 containerd[1635]: time="2024-10-09T01:06:44.068270085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-klssd,Uid:e2f8b8fa-8c8b-4cbd-8261-946f83c2add3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"16fc4306eea06da792706221332bda2e7ef25d9d835a5a76ec4afac1b22f7c69\"" Oct 9 01:06:44.069765 containerd[1635]: time="2024-10-09T01:06:44.069719660Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 9 01:06:44.071568 containerd[1635]: time="2024-10-09T01:06:44.071536227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hh6q5,Uid:14f3f0c8-31db-4994-8583-2faa3830b844,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:44.092636 containerd[1635]: time="2024-10-09T01:06:44.092323010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:44.092636 containerd[1635]: time="2024-10-09T01:06:44.092427802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:44.092636 containerd[1635]: time="2024-10-09T01:06:44.092442288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:44.092636 containerd[1635]: time="2024-10-09T01:06:44.092553782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:44.130739 containerd[1635]: time="2024-10-09T01:06:44.130577175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hh6q5,Uid:14f3f0c8-31db-4994-8583-2faa3830b844,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0d4c9b3178194a11c6f05a0c1c3d2ff956c6851b63ae8fa829fb3096267bf7e\"" Oct 9 01:06:44.132951 containerd[1635]: time="2024-10-09T01:06:44.132859324Z" level=info msg="CreateContainer within sandbox \"f0d4c9b3178194a11c6f05a0c1c3d2ff956c6851b63ae8fa829fb3096267bf7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 9 01:06:44.144355 containerd[1635]: time="2024-10-09T01:06:44.144285973Z" level=info msg="CreateContainer within sandbox \"f0d4c9b3178194a11c6f05a0c1c3d2ff956c6851b63ae8fa829fb3096267bf7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"964df399bd1d8501591237bb3976463ee2e30018fb566d12f8486432d1c8b8a9\"" Oct 9 01:06:44.144700 containerd[1635]: time="2024-10-09T01:06:44.144645060Z" level=info msg="StartContainer for \"964df399bd1d8501591237bb3976463ee2e30018fb566d12f8486432d1c8b8a9\"" Oct 9 01:06:44.212589 containerd[1635]: time="2024-10-09T01:06:44.212540776Z" level=info msg="StartContainer for \"964df399bd1d8501591237bb3976463ee2e30018fb566d12f8486432d1c8b8a9\" returns successfully" Oct 9 01:06:44.908200 systemd[1]: run-containerd-runc-k8s.io-16fc4306eea06da792706221332bda2e7ef25d9d835a5a76ec4afac1b22f7c69-runc.bqakjV.mount: Deactivated successfully. 
Oct 9 01:06:44.996795 kubelet[2975]: I1009 01:06:44.996741 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-hh6q5" podStartSLOduration=1.996708333 podStartE2EDuration="1.996708333s" podCreationTimestamp="2024-10-09 01:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:06:44.996512665 +0000 UTC m=+16.204757282" watchObservedRunningTime="2024-10-09 01:06:44.996708333 +0000 UTC m=+16.204952950" Oct 9 01:06:45.479619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1275260474.mount: Deactivated successfully. Oct 9 01:06:45.822532 containerd[1635]: time="2024-10-09T01:06:45.822482085Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:45.823650 containerd[1635]: time="2024-10-09T01:06:45.823612410Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=22136529" Oct 9 01:06:45.824860 containerd[1635]: time="2024-10-09T01:06:45.824822890Z" level=info msg="ImageCreate event name:\"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:45.827014 containerd[1635]: time="2024-10-09T01:06:45.826993702Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:45.827636 containerd[1635]: time="2024-10-09T01:06:45.827496554Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", 
size \"22130728\" in 1.757748223s" Oct 9 01:06:45.827636 containerd[1635]: time="2024-10-09T01:06:45.827521260Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:d4e6e064c25d51e66b2470e80d7b57004f79e2a76b37e83986577f8666da9736\"" Oct 9 01:06:45.829654 containerd[1635]: time="2024-10-09T01:06:45.829629779Z" level=info msg="CreateContainer within sandbox \"16fc4306eea06da792706221332bda2e7ef25d9d835a5a76ec4afac1b22f7c69\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 9 01:06:45.841553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697213695.mount: Deactivated successfully. Oct 9 01:06:45.846520 containerd[1635]: time="2024-10-09T01:06:45.846443213Z" level=info msg="CreateContainer within sandbox \"16fc4306eea06da792706221332bda2e7ef25d9d835a5a76ec4afac1b22f7c69\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ad1d5cbc22f7af64cc9e29617adc45837a478ad3b1127cd277e25bcaa190c0c0\"" Oct 9 01:06:45.847883 containerd[1635]: time="2024-10-09T01:06:45.847136754Z" level=info msg="StartContainer for \"ad1d5cbc22f7af64cc9e29617adc45837a478ad3b1127cd277e25bcaa190c0c0\"" Oct 9 01:06:45.903226 containerd[1635]: time="2024-10-09T01:06:45.903193791Z" level=info msg="StartContainer for \"ad1d5cbc22f7af64cc9e29617adc45837a478ad3b1127cd277e25bcaa190c0c0\" returns successfully" Oct 9 01:06:48.892009 kubelet[2975]: I1009 01:06:48.891793 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-klssd" podStartSLOduration=4.13318801 podStartE2EDuration="5.891740196s" podCreationTimestamp="2024-10-09 01:06:43 +0000 UTC" firstStartedPulling="2024-10-09 01:06:44.069273212 +0000 UTC m=+15.277517819" lastFinishedPulling="2024-10-09 01:06:45.827825388 +0000 UTC m=+17.036070005" observedRunningTime="2024-10-09 01:06:45.997484919 +0000 UTC m=+17.205729527" watchObservedRunningTime="2024-10-09 01:06:48.891740196 +0000 UTC 
m=+20.099984804" Oct 9 01:06:48.895097 kubelet[2975]: I1009 01:06:48.892547 2975 topology_manager.go:215] "Topology Admit Handler" podUID="4f82bb2d-4147-44c4-8b18-34ccc0181473" podNamespace="calico-system" podName="calico-typha-f68cdd44d-gcs57" Oct 9 01:06:48.930947 kubelet[2975]: I1009 01:06:48.930208 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4f82bb2d-4147-44c4-8b18-34ccc0181473-typha-certs\") pod \"calico-typha-f68cdd44d-gcs57\" (UID: \"4f82bb2d-4147-44c4-8b18-34ccc0181473\") " pod="calico-system/calico-typha-f68cdd44d-gcs57" Oct 9 01:06:48.930947 kubelet[2975]: I1009 01:06:48.930262 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f82bb2d-4147-44c4-8b18-34ccc0181473-tigera-ca-bundle\") pod \"calico-typha-f68cdd44d-gcs57\" (UID: \"4f82bb2d-4147-44c4-8b18-34ccc0181473\") " pod="calico-system/calico-typha-f68cdd44d-gcs57" Oct 9 01:06:48.930947 kubelet[2975]: I1009 01:06:48.930283 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7pr5\" (UniqueName: \"kubernetes.io/projected/4f82bb2d-4147-44c4-8b18-34ccc0181473-kube-api-access-c7pr5\") pod \"calico-typha-f68cdd44d-gcs57\" (UID: \"4f82bb2d-4147-44c4-8b18-34ccc0181473\") " pod="calico-system/calico-typha-f68cdd44d-gcs57" Oct 9 01:06:48.947001 kubelet[2975]: I1009 01:06:48.946623 2975 topology_manager.go:215] "Topology Admit Handler" podUID="c8a65590-905f-4e66-a34f-0664b9295cc4" podNamespace="calico-system" podName="calico-node-ksljv" Oct 9 01:06:49.030590 kubelet[2975]: I1009 01:06:49.030527 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-policysync\") pod \"calico-node-ksljv\" (UID: 
\"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030590 kubelet[2975]: I1009 01:06:49.030589 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-var-lib-calico\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030747 kubelet[2975]: I1009 01:06:49.030610 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-cni-net-dir\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030747 kubelet[2975]: I1009 01:06:49.030633 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-xtables-lock\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030747 kubelet[2975]: I1009 01:06:49.030659 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-var-run-calico\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030747 kubelet[2975]: I1009 01:06:49.030681 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-cni-log-dir\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 
9 01:06:49.030747 kubelet[2975]: I1009 01:06:49.030709 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-flexvol-driver-host\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030862 kubelet[2975]: I1009 01:06:49.030738 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-lib-modules\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030862 kubelet[2975]: I1009 01:06:49.030795 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c8a65590-905f-4e66-a34f-0664b9295cc4-node-certs\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030862 kubelet[2975]: I1009 01:06:49.030827 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c8a65590-905f-4e66-a34f-0664b9295cc4-cni-bin-dir\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030926 kubelet[2975]: I1009 01:06:49.030867 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8a65590-905f-4e66-a34f-0664b9295cc4-tigera-ca-bundle\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.030926 kubelet[2975]: I1009 01:06:49.030907 2975 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zs9b\" (UniqueName: \"kubernetes.io/projected/c8a65590-905f-4e66-a34f-0664b9295cc4-kube-api-access-5zs9b\") pod \"calico-node-ksljv\" (UID: \"c8a65590-905f-4e66-a34f-0664b9295cc4\") " pod="calico-system/calico-node-ksljv" Oct 9 01:06:49.076550 kubelet[2975]: I1009 01:06:49.073765 2975 topology_manager.go:215] "Topology Admit Handler" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" podNamespace="calico-system" podName="csi-node-driver-kqtdh" Oct 9 01:06:49.076997 kubelet[2975]: E1009 01:06:49.076939 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqtdh" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" Oct 9 01:06:49.132239 kubelet[2975]: I1009 01:06:49.132196 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8316d30b-f132-4ca2-a04b-4276c8d6a2b0-registration-dir\") pod \"csi-node-driver-kqtdh\" (UID: \"8316d30b-f132-4ca2-a04b-4276c8d6a2b0\") " pod="calico-system/csi-node-driver-kqtdh" Oct 9 01:06:49.132239 kubelet[2975]: I1009 01:06:49.132250 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxfdn\" (UniqueName: \"kubernetes.io/projected/8316d30b-f132-4ca2-a04b-4276c8d6a2b0-kube-api-access-fxfdn\") pod \"csi-node-driver-kqtdh\" (UID: \"8316d30b-f132-4ca2-a04b-4276c8d6a2b0\") " pod="calico-system/csi-node-driver-kqtdh" Oct 9 01:06:49.132397 kubelet[2975]: I1009 01:06:49.132285 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/8316d30b-f132-4ca2-a04b-4276c8d6a2b0-socket-dir\") pod \"csi-node-driver-kqtdh\" (UID: \"8316d30b-f132-4ca2-a04b-4276c8d6a2b0\") " pod="calico-system/csi-node-driver-kqtdh" Oct 9 01:06:49.132397 kubelet[2975]: I1009 01:06:49.132330 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8316d30b-f132-4ca2-a04b-4276c8d6a2b0-varrun\") pod \"csi-node-driver-kqtdh\" (UID: \"8316d30b-f132-4ca2-a04b-4276c8d6a2b0\") " pod="calico-system/csi-node-driver-kqtdh" Oct 9 01:06:49.132397 kubelet[2975]: I1009 01:06:49.132388 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8316d30b-f132-4ca2-a04b-4276c8d6a2b0-kubelet-dir\") pod \"csi-node-driver-kqtdh\" (UID: \"8316d30b-f132-4ca2-a04b-4276c8d6a2b0\") " pod="calico-system/csi-node-driver-kqtdh" Oct 9 01:06:49.157843 kubelet[2975]: E1009 01:06:49.152588 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.157843 kubelet[2975]: W1009 01:06:49.152609 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.157843 kubelet[2975]: E1009 01:06:49.152635 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.163152 kubelet[2975]: E1009 01:06:49.163132 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.163320 kubelet[2975]: W1009 01:06:49.163306 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.163381 kubelet[2975]: E1009 01:06:49.163370 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.227008 containerd[1635]: time="2024-10-09T01:06:49.226924940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f68cdd44d-gcs57,Uid:4f82bb2d-4147-44c4-8b18-34ccc0181473,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:49.238242 kubelet[2975]: E1009 01:06:49.238216 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.238242 kubelet[2975]: W1009 01:06:49.238237 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.238393 kubelet[2975]: E1009 01:06:49.238258 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.239644 kubelet[2975]: E1009 01:06:49.239619 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.239644 kubelet[2975]: W1009 01:06:49.239637 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.240324 kubelet[2975]: E1009 01:06:49.239832 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.240888 kubelet[2975]: E1009 01:06:49.240862 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.240933 kubelet[2975]: W1009 01:06:49.240883 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.241741 kubelet[2975]: E1009 01:06:49.241116 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.242256 kubelet[2975]: E1009 01:06:49.242055 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.242256 kubelet[2975]: W1009 01:06:49.242070 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.242696 kubelet[2975]: E1009 01:06:49.242371 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.243356 kubelet[2975]: E1009 01:06:49.242869 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.243356 kubelet[2975]: W1009 01:06:49.242884 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.243356 kubelet[2975]: E1009 01:06:49.243285 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.244091 kubelet[2975]: E1009 01:06:49.244022 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.244502 kubelet[2975]: W1009 01:06:49.244223 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.244502 kubelet[2975]: E1009 01:06:49.244486 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.246572 kubelet[2975]: E1009 01:06:49.246165 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.246572 kubelet[2975]: W1009 01:06:49.246188 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.247684 kubelet[2975]: E1009 01:06:49.247576 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.248132 kubelet[2975]: E1009 01:06:49.247750 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.248132 kubelet[2975]: W1009 01:06:49.247762 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.248132 kubelet[2975]: E1009 01:06:49.247914 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.248132 kubelet[2975]: E1009 01:06:49.248088 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.248132 kubelet[2975]: W1009 01:06:49.248096 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.248617 kubelet[2975]: E1009 01:06:49.248530 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.249134 kubelet[2975]: E1009 01:06:49.249100 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.249134 kubelet[2975]: W1009 01:06:49.249114 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.249795 kubelet[2975]: E1009 01:06:49.249726 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.260896 containerd[1635]: time="2024-10-09T01:06:49.259043652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ksljv,Uid:c8a65590-905f-4e66-a34f-0664b9295cc4,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:49.264358 kubelet[2975]: E1009 01:06:49.264287 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.264484 kubelet[2975]: W1009 01:06:49.264453 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.264693 kubelet[2975]: E1009 01:06:49.264681 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.265242 kubelet[2975]: E1009 01:06:49.265119 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.265242 kubelet[2975]: W1009 01:06:49.265198 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.265530 kubelet[2975]: E1009 01:06:49.265451 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.265945 kubelet[2975]: E1009 01:06:49.265868 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.265945 kubelet[2975]: W1009 01:06:49.265930 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.266377 kubelet[2975]: E1009 01:06:49.266166 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.266828 kubelet[2975]: E1009 01:06:49.266733 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.266828 kubelet[2975]: W1009 01:06:49.266745 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.266828 kubelet[2975]: E1009 01:06:49.266813 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.267429 kubelet[2975]: E1009 01:06:49.267394 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.267608 kubelet[2975]: W1009 01:06:49.267518 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.267725 kubelet[2975]: E1009 01:06:49.267689 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.269179 kubelet[2975]: E1009 01:06:49.269059 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.269179 kubelet[2975]: W1009 01:06:49.269075 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.270116 kubelet[2975]: E1009 01:06:49.269384 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.270116 kubelet[2975]: E1009 01:06:49.269646 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.270116 kubelet[2975]: W1009 01:06:49.269658 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.270116 kubelet[2975]: E1009 01:06:49.269719 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.270116 kubelet[2975]: E1009 01:06:49.269888 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.270116 kubelet[2975]: W1009 01:06:49.269907 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.270116 kubelet[2975]: E1009 01:06:49.270061 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.270478 kubelet[2975]: E1009 01:06:49.270317 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.270478 kubelet[2975]: W1009 01:06:49.270329 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.274198 kubelet[2975]: E1009 01:06:49.274131 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.274198 kubelet[2975]: W1009 01:06:49.274170 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.274198 kubelet[2975]: E1009 01:06:49.274186 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.276349 kubelet[2975]: E1009 01:06:49.276022 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.276349 kubelet[2975]: W1009 01:06:49.276035 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.276349 kubelet[2975]: E1009 01:06:49.276047 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.278374 containerd[1635]: time="2024-10-09T01:06:49.276693885Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:49.278374 containerd[1635]: time="2024-10-09T01:06:49.276831449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:49.278374 containerd[1635]: time="2024-10-09T01:06:49.276842639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:49.278374 containerd[1635]: time="2024-10-09T01:06:49.277463435Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:49.281846 kubelet[2975]: E1009 01:06:49.281034 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.281846 kubelet[2975]: W1009 01:06:49.281047 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.281846 kubelet[2975]: E1009 01:06:49.281061 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.281846 kubelet[2975]: E1009 01:06:49.281316 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.281846 kubelet[2975]: W1009 01:06:49.281324 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.281846 kubelet[2975]: E1009 01:06:49.281335 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.281846 kubelet[2975]: E1009 01:06:49.281381 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.281846 kubelet[2975]: E1009 01:06:49.281655 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.281846 kubelet[2975]: W1009 01:06:49.281663 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.281846 kubelet[2975]: E1009 01:06:49.281673 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.288860 kubelet[2975]: E1009 01:06:49.288836 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.288860 kubelet[2975]: W1009 01:06:49.288853 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.288935 kubelet[2975]: E1009 01:06:49.288866 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 9 01:06:49.312724 kubelet[2975]: E1009 01:06:49.311577 2975 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 9 01:06:49.312724 kubelet[2975]: W1009 01:06:49.311596 2975 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 9 01:06:49.312724 kubelet[2975]: E1009 01:06:49.311629 2975 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 9 01:06:49.327638 containerd[1635]: time="2024-10-09T01:06:49.322407675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:06:49.327638 containerd[1635]: time="2024-10-09T01:06:49.326640476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:06:49.327638 containerd[1635]: time="2024-10-09T01:06:49.326765015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:49.329942 containerd[1635]: time="2024-10-09T01:06:49.328728960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:06:49.448012 containerd[1635]: time="2024-10-09T01:06:49.445305069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ksljv,Uid:c8a65590-905f-4e66-a34f-0664b9295cc4,Namespace:calico-system,Attempt:0,} returns sandbox id \"690da6c50f3e9e0b65b4822edfc706455d199985efffea038c88339f5f5973f1\"" Oct 9 01:06:49.460681 containerd[1635]: time="2024-10-09T01:06:49.460491875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 9 01:06:49.472387 containerd[1635]: time="2024-10-09T01:06:49.472080962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f68cdd44d-gcs57,Uid:4f82bb2d-4147-44c4-8b18-34ccc0181473,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b14934eb4fbc09c4396863d0e67bbb9accc542b20d778d156654dc2ebb9c2d4\"" Oct 9 01:06:50.913078 kubelet[2975]: E1009 01:06:50.912308 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqtdh" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" Oct 9 01:06:51.021315 containerd[1635]: time="2024-10-09T01:06:51.021262425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:51.022537 containerd[1635]: time="2024-10-09T01:06:51.022433542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=5141007" Oct 9 01:06:51.023534 containerd[1635]: time="2024-10-09T01:06:51.023281612Z" level=info msg="ImageCreate event name:\"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:51.025886 containerd[1635]: 
time="2024-10-09T01:06:51.025860413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:51.026843 containerd[1635]: time="2024-10-09T01:06:51.026808057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6633368\" in 1.566286004s" Oct 9 01:06:51.027012 containerd[1635]: time="2024-10-09T01:06:51.026962422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:00564b1c843430f804fda219f98769c25b538adebc11504477d5ee331fd8f85b\"" Oct 9 01:06:51.029383 containerd[1635]: time="2024-10-09T01:06:51.029341985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 9 01:06:51.030847 containerd[1635]: time="2024-10-09T01:06:51.030715817Z" level=info msg="CreateContainer within sandbox \"690da6c50f3e9e0b65b4822edfc706455d199985efffea038c88339f5f5973f1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 9 01:06:51.046025 containerd[1635]: time="2024-10-09T01:06:51.045940638Z" level=info msg="CreateContainer within sandbox \"690da6c50f3e9e0b65b4822edfc706455d199985efffea038c88339f5f5973f1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4215d0f2f558a9710fd87484db9786bec53373ef7b190f578dd74169da748044\"" Oct 9 01:06:51.047017 containerd[1635]: time="2024-10-09T01:06:51.046564643Z" level=info msg="StartContainer for \"4215d0f2f558a9710fd87484db9786bec53373ef7b190f578dd74169da748044\"" Oct 9 01:06:51.117785 containerd[1635]: 
time="2024-10-09T01:06:51.117748067Z" level=info msg="StartContainer for \"4215d0f2f558a9710fd87484db9786bec53373ef7b190f578dd74169da748044\" returns successfully" Oct 9 01:06:51.164799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4215d0f2f558a9710fd87484db9786bec53373ef7b190f578dd74169da748044-rootfs.mount: Deactivated successfully. Oct 9 01:06:51.206409 containerd[1635]: time="2024-10-09T01:06:51.177347577Z" level=info msg="shim disconnected" id=4215d0f2f558a9710fd87484db9786bec53373ef7b190f578dd74169da748044 namespace=k8s.io Oct 9 01:06:51.206409 containerd[1635]: time="2024-10-09T01:06:51.206407127Z" level=warning msg="cleaning up after shim disconnected" id=4215d0f2f558a9710fd87484db9786bec53373ef7b190f578dd74169da748044 namespace=k8s.io Oct 9 01:06:51.206651 containerd[1635]: time="2024-10-09T01:06:51.206420843Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:06:52.914037 kubelet[2975]: E1009 01:06:52.913731 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqtdh" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" Oct 9 01:06:53.483577 containerd[1635]: time="2024-10-09T01:06:53.483525764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:53.501710 containerd[1635]: time="2024-10-09T01:06:53.501454743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=29471335" Oct 9 01:06:53.501710 containerd[1635]: time="2024-10-09T01:06:53.501578041Z" level=info msg="ImageCreate event name:\"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:53.509855 containerd[1635]: 
time="2024-10-09T01:06:53.509773464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:53.510734 containerd[1635]: time="2024-10-09T01:06:53.510227396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"30963728\" in 2.480837182s" Oct 9 01:06:53.510734 containerd[1635]: time="2024-10-09T01:06:53.510257793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:a19ab150adede78dd36481226e260735eb3b811481c6765aec79e8da6ae78b7f\"" Oct 9 01:06:53.511607 containerd[1635]: time="2024-10-09T01:06:53.511355278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 9 01:06:53.524766 containerd[1635]: time="2024-10-09T01:06:53.524726112Z" level=info msg="CreateContainer within sandbox \"3b14934eb4fbc09c4396863d0e67bbb9accc542b20d778d156654dc2ebb9c2d4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 9 01:06:53.544925 containerd[1635]: time="2024-10-09T01:06:53.544878013Z" level=info msg="CreateContainer within sandbox \"3b14934eb4fbc09c4396863d0e67bbb9accc542b20d778d156654dc2ebb9c2d4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"af4f1a70b3781a35aabcbd3cf81be8c3b80087d50191b167e5f502237abce888\"" Oct 9 01:06:53.545940 containerd[1635]: time="2024-10-09T01:06:53.545480871Z" level=info msg="StartContainer for \"af4f1a70b3781a35aabcbd3cf81be8c3b80087d50191b167e5f502237abce888\"" Oct 9 01:06:53.635210 containerd[1635]: time="2024-10-09T01:06:53.635154195Z" level=info msg="StartContainer for 
\"af4f1a70b3781a35aabcbd3cf81be8c3b80087d50191b167e5f502237abce888\" returns successfully" Oct 9 01:06:54.040633 kubelet[2975]: I1009 01:06:54.039877 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-f68cdd44d-gcs57" podStartSLOduration=2.002428549 podStartE2EDuration="6.0398452s" podCreationTimestamp="2024-10-09 01:06:48 +0000 UTC" firstStartedPulling="2024-10-09 01:06:49.473138915 +0000 UTC m=+20.681383521" lastFinishedPulling="2024-10-09 01:06:53.510555555 +0000 UTC m=+24.718800172" observedRunningTime="2024-10-09 01:06:54.027046731 +0000 UTC m=+25.235291368" watchObservedRunningTime="2024-10-09 01:06:54.0398452 +0000 UTC m=+25.248089807" Oct 9 01:06:54.912115 kubelet[2975]: E1009 01:06:54.911670 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqtdh" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" Oct 9 01:06:56.912685 kubelet[2975]: E1009 01:06:56.912373 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kqtdh" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" Oct 9 01:06:57.899059 containerd[1635]: time="2024-10-09T01:06:57.899027058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:57.900996 containerd[1635]: time="2024-10-09T01:06:57.900108153Z" level=info msg="ImageCreate event name:\"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:57.900996 containerd[1635]: 
time="2024-10-09T01:06:57.900141055Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=93083736" Oct 9 01:06:57.905670 containerd[1635]: time="2024-10-09T01:06:57.905641894Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:06:57.906380 containerd[1635]: time="2024-10-09T01:06:57.906359612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"94576137\" in 4.394978486s" Oct 9 01:06:57.906486 containerd[1635]: time="2024-10-09T01:06:57.906470468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:f6d76a1259a8c22fd1c603577ee5bb8109bc40f2b3d0536d39160a027ffe9bab\"" Oct 9 01:06:57.908311 containerd[1635]: time="2024-10-09T01:06:57.908283857Z" level=info msg="CreateContainer within sandbox \"690da6c50f3e9e0b65b4822edfc706455d199985efffea038c88339f5f5973f1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 9 01:06:57.934302 containerd[1635]: time="2024-10-09T01:06:57.934273268Z" level=info msg="CreateContainer within sandbox \"690da6c50f3e9e0b65b4822edfc706455d199985efffea038c88339f5f5973f1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ca33d018b738fab8bb2bf237c3ea0d9c75867ebc327f64e4dcd1a1a7183e2b2b\"" Oct 9 01:06:57.935715 containerd[1635]: time="2024-10-09T01:06:57.934776436Z" level=info msg="StartContainer for \"ca33d018b738fab8bb2bf237c3ea0d9c75867ebc327f64e4dcd1a1a7183e2b2b\"" Oct 9 01:06:58.029867 containerd[1635]: time="2024-10-09T01:06:58.029830209Z" level=info msg="StartContainer for 
\"ca33d018b738fab8bb2bf237c3ea0d9c75867ebc327f64e4dcd1a1a7183e2b2b\" returns successfully" Oct 9 01:06:58.585103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca33d018b738fab8bb2bf237c3ea0d9c75867ebc327f64e4dcd1a1a7183e2b2b-rootfs.mount: Deactivated successfully. Oct 9 01:06:58.588575 containerd[1635]: time="2024-10-09T01:06:58.588528594Z" level=info msg="shim disconnected" id=ca33d018b738fab8bb2bf237c3ea0d9c75867ebc327f64e4dcd1a1a7183e2b2b namespace=k8s.io Oct 9 01:06:58.588701 containerd[1635]: time="2024-10-09T01:06:58.588686848Z" level=warning msg="cleaning up after shim disconnected" id=ca33d018b738fab8bb2bf237c3ea0d9c75867ebc327f64e4dcd1a1a7183e2b2b namespace=k8s.io Oct 9 01:06:58.588862 containerd[1635]: time="2024-10-09T01:06:58.588741470Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 9 01:06:58.591324 kubelet[2975]: I1009 01:06:58.591309 2975 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 9 01:06:58.608004 containerd[1635]: time="2024-10-09T01:06:58.607483241Z" level=warning msg="cleanup warnings time=\"2024-10-09T01:06:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 9 01:06:58.620785 kubelet[2975]: I1009 01:06:58.620676 2975 topology_manager.go:215] "Topology Admit Handler" podUID="c783c086-b681-4894-9dd2-660195d788ef" podNamespace="kube-system" podName="coredns-76f75df574-6zdq4" Oct 9 01:06:58.628587 kubelet[2975]: I1009 01:06:58.628552 2975 topology_manager.go:215] "Topology Admit Handler" podUID="f443b1d9-bed1-4b25-880d-a05ee8cfe5b8" podNamespace="kube-system" podName="coredns-76f75df574-m4ftf" Oct 9 01:06:58.629457 kubelet[2975]: I1009 01:06:58.629075 2975 topology_manager.go:215] "Topology Admit Handler" podUID="59b2fdb2-7c87-464c-9731-460e7a5b18c0" podNamespace="calico-system" podName="calico-kube-controllers-7d677fb677-mcqqn" Oct 9 01:06:58.733729 
kubelet[2975]: I1009 01:06:58.733683 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59b2fdb2-7c87-464c-9731-460e7a5b18c0-tigera-ca-bundle\") pod \"calico-kube-controllers-7d677fb677-mcqqn\" (UID: \"59b2fdb2-7c87-464c-9731-460e7a5b18c0\") " pod="calico-system/calico-kube-controllers-7d677fb677-mcqqn" Oct 9 01:06:58.733958 kubelet[2975]: I1009 01:06:58.733945 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c783c086-b681-4894-9dd2-660195d788ef-config-volume\") pod \"coredns-76f75df574-6zdq4\" (UID: \"c783c086-b681-4894-9dd2-660195d788ef\") " pod="kube-system/coredns-76f75df574-6zdq4" Oct 9 01:06:58.734525 kubelet[2975]: I1009 01:06:58.734175 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9qbf\" (UniqueName: \"kubernetes.io/projected/59b2fdb2-7c87-464c-9731-460e7a5b18c0-kube-api-access-r9qbf\") pod \"calico-kube-controllers-7d677fb677-mcqqn\" (UID: \"59b2fdb2-7c87-464c-9731-460e7a5b18c0\") " pod="calico-system/calico-kube-controllers-7d677fb677-mcqqn" Oct 9 01:06:58.734525 kubelet[2975]: I1009 01:06:58.734201 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbsnk\" (UniqueName: \"kubernetes.io/projected/c783c086-b681-4894-9dd2-660195d788ef-kube-api-access-pbsnk\") pod \"coredns-76f75df574-6zdq4\" (UID: \"c783c086-b681-4894-9dd2-660195d788ef\") " pod="kube-system/coredns-76f75df574-6zdq4" Oct 9 01:06:58.734525 kubelet[2975]: I1009 01:06:58.734224 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f443b1d9-bed1-4b25-880d-a05ee8cfe5b8-config-volume\") pod \"coredns-76f75df574-m4ftf\" (UID: 
\"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8\") " pod="kube-system/coredns-76f75df574-m4ftf" Oct 9 01:06:58.734525 kubelet[2975]: I1009 01:06:58.734261 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sffz\" (UniqueName: \"kubernetes.io/projected/f443b1d9-bed1-4b25-880d-a05ee8cfe5b8-kube-api-access-4sffz\") pod \"coredns-76f75df574-m4ftf\" (UID: \"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8\") " pod="kube-system/coredns-76f75df574-m4ftf" Oct 9 01:06:58.915883 containerd[1635]: time="2024-10-09T01:06:58.915726508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqtdh,Uid:8316d30b-f132-4ca2-a04b-4276c8d6a2b0,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:58.934897 containerd[1635]: time="2024-10-09T01:06:58.934706342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6zdq4,Uid:c783c086-b681-4894-9dd2-660195d788ef,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:58.937770 containerd[1635]: time="2024-10-09T01:06:58.937638943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d677fb677-mcqqn,Uid:59b2fdb2-7c87-464c-9731-460e7a5b18c0,Namespace:calico-system,Attempt:0,}" Oct 9 01:06:58.941990 containerd[1635]: time="2024-10-09T01:06:58.939444160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4ftf,Uid:f443b1d9-bed1-4b25-880d-a05ee8cfe5b8,Namespace:kube-system,Attempt:0,}" Oct 9 01:06:59.044599 containerd[1635]: time="2024-10-09T01:06:59.043570266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 9 01:06:59.187471 containerd[1635]: time="2024-10-09T01:06:59.186519695Z" level=error msg="Failed to destroy network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 9 01:06:59.192822 containerd[1635]: time="2024-10-09T01:06:59.192257919Z" level=error msg="encountered an error cleaning up failed sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.192936 containerd[1635]: time="2024-10-09T01:06:59.192913403Z" level=error msg="Failed to destroy network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.193326 containerd[1635]: time="2024-10-09T01:06:59.193297160Z" level=error msg="encountered an error cleaning up failed sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.199084 containerd[1635]: time="2024-10-09T01:06:59.198881257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqtdh,Uid:8316d30b-f132-4ca2-a04b-4276c8d6a2b0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.206858 containerd[1635]: time="2024-10-09T01:06:59.206082303Z" level=error msg="Failed to destroy network for sandbox 
\"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.206858 containerd[1635]: time="2024-10-09T01:06:59.206348500Z" level=error msg="encountered an error cleaning up failed sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.206858 containerd[1635]: time="2024-10-09T01:06:59.206376092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d677fb677-mcqqn,Uid:59b2fdb2-7c87-464c-9731-460e7a5b18c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.206858 containerd[1635]: time="2024-10-09T01:06:59.206448937Z" level=error msg="Failed to destroy network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.206858 containerd[1635]: time="2024-10-09T01:06:59.206721707Z" level=error msg="encountered an error cleaning up failed sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.206858 containerd[1635]: time="2024-10-09T01:06:59.206761401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4ftf,Uid:f443b1d9-bed1-4b25-880d-a05ee8cfe5b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.207653 kubelet[2975]: E1009 01:06:59.207266 2975 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.207653 kubelet[2975]: E1009 01:06:59.207276 2975 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.207653 kubelet[2975]: E1009 01:06:59.207341 2975 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-m4ftf" Oct 9 01:06:59.207653 kubelet[2975]: E1009 01:06:59.207354 2975 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.209019 kubelet[2975]: E1009 01:06:59.207366 2975 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-m4ftf" Oct 9 01:06:59.209019 kubelet[2975]: E1009 01:06:59.207392 2975 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d677fb677-mcqqn" Oct 9 01:06:59.209019 kubelet[2975]: E1009 01:06:59.207417 2975 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7d677fb677-mcqqn" Oct 9 01:06:59.209139 kubelet[2975]: E1009 01:06:59.207436 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-m4ftf_kube-system(f443b1d9-bed1-4b25-880d-a05ee8cfe5b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-m4ftf_kube-system(f443b1d9-bed1-4b25-880d-a05ee8cfe5b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m4ftf" podUID="f443b1d9-bed1-4b25-880d-a05ee8cfe5b8" Oct 9 01:06:59.209139 kubelet[2975]: E1009 01:06:59.207469 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d677fb677-mcqqn_calico-system(59b2fdb2-7c87-464c-9731-460e7a5b18c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d677fb677-mcqqn_calico-system(59b2fdb2-7c87-464c-9731-460e7a5b18c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d677fb677-mcqqn" podUID="59b2fdb2-7c87-464c-9731-460e7a5b18c0" Oct 9 01:06:59.209139 kubelet[2975]: E1009 01:06:59.207510 2975 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kqtdh" Oct 9 01:06:59.209374 kubelet[2975]: E1009 01:06:59.207537 2975 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kqtdh" Oct 9 01:06:59.209374 kubelet[2975]: E1009 01:06:59.207571 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kqtdh_calico-system(8316d30b-f132-4ca2-a04b-4276c8d6a2b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kqtdh_calico-system(8316d30b-f132-4ca2-a04b-4276c8d6a2b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kqtdh" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" Oct 9 01:06:59.213014 containerd[1635]: time="2024-10-09T01:06:59.212904832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6zdq4,Uid:c783c086-b681-4894-9dd2-660195d788ef,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Oct 9 01:06:59.213299 kubelet[2975]: E1009 01:06:59.213027 2975 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:06:59.213299 kubelet[2975]: E1009 01:06:59.213054 2975 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6zdq4" Oct 9 01:06:59.213299 kubelet[2975]: E1009 01:06:59.213071 2975 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-6zdq4" Oct 9 01:06:59.213423 kubelet[2975]: E1009 01:06:59.213102 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-6zdq4_kube-system(c783c086-b681-4894-9dd2-660195d788ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-6zdq4_kube-system(c783c086-b681-4894-9dd2-660195d788ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6zdq4" podUID="c783c086-b681-4894-9dd2-660195d788ef" Oct 9 01:06:59.930764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa-shm.mount: Deactivated successfully. Oct 9 01:06:59.931335 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25-shm.mount: Deactivated successfully. Oct 9 01:06:59.931502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50-shm.mount: Deactivated successfully. Oct 9 01:07:00.036485 kubelet[2975]: I1009 01:07:00.036425 2975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:00.037554 containerd[1635]: time="2024-10-09T01:07:00.037206065Z" level=info msg="StopPodSandbox for \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\"" Oct 9 01:07:00.040200 kubelet[2975]: I1009 01:07:00.040183 2975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:00.041805 containerd[1635]: time="2024-10-09T01:07:00.040893097Z" level=info msg="StopPodSandbox for \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\"" Oct 9 01:07:00.044741 containerd[1635]: time="2024-10-09T01:07:00.044708869Z" level=info msg="Ensure that sandbox 823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c in task-service has been cleanup successfully" Oct 9 01:07:00.045962 containerd[1635]: time="2024-10-09T01:07:00.045897230Z" level=info msg="Ensure that sandbox 
67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25 in task-service has been cleanup successfully" Oct 9 01:07:00.048995 kubelet[2975]: I1009 01:07:00.048801 2975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:00.050548 containerd[1635]: time="2024-10-09T01:07:00.050519721Z" level=info msg="StopPodSandbox for \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\"" Oct 9 01:07:00.051300 containerd[1635]: time="2024-10-09T01:07:00.051198809Z" level=info msg="Ensure that sandbox 3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50 in task-service has been cleanup successfully" Oct 9 01:07:00.054725 kubelet[2975]: I1009 01:07:00.054669 2975 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:00.055562 containerd[1635]: time="2024-10-09T01:07:00.055087478Z" level=info msg="StopPodSandbox for \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\"" Oct 9 01:07:00.056114 containerd[1635]: time="2024-10-09T01:07:00.056093739Z" level=info msg="Ensure that sandbox aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa in task-service has been cleanup successfully" Oct 9 01:07:00.103238 containerd[1635]: time="2024-10-09T01:07:00.103114847Z" level=error msg="StopPodSandbox for \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\" failed" error="failed to destroy network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:00.103622 kubelet[2975]: E1009 01:07:00.103474 2975 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:00.103622 kubelet[2975]: E1009 01:07:00.103536 2975 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c"} Oct 9 01:07:00.103622 kubelet[2975]: E1009 01:07:00.103566 2975 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:00.103622 kubelet[2975]: E1009 01:07:00.103594 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-m4ftf" podUID="f443b1d9-bed1-4b25-880d-a05ee8cfe5b8" Oct 9 01:07:00.104100 containerd[1635]: time="2024-10-09T01:07:00.104056237Z" level=error msg="StopPodSandbox for \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\" 
failed" error="failed to destroy network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:00.104413 kubelet[2975]: E1009 01:07:00.104319 2975 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:00.104413 kubelet[2975]: E1009 01:07:00.104342 2975 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25"} Oct 9 01:07:00.104413 kubelet[2975]: E1009 01:07:00.104378 2975 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c783c086-b681-4894-9dd2-660195d788ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:00.104413 kubelet[2975]: E1009 01:07:00.104398 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c783c086-b681-4894-9dd2-660195d788ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-6zdq4" podUID="c783c086-b681-4894-9dd2-660195d788ef" Oct 9 01:07:00.108531 containerd[1635]: time="2024-10-09T01:07:00.108475626Z" level=error msg="StopPodSandbox for \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\" failed" error="failed to destroy network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:00.108777 kubelet[2975]: E1009 01:07:00.108695 2975 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:00.108777 kubelet[2975]: E1009 01:07:00.108714 2975 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50"} Oct 9 01:07:00.108777 kubelet[2975]: E1009 01:07:00.108740 2975 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8316d30b-f132-4ca2-a04b-4276c8d6a2b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:00.108777 kubelet[2975]: E1009 01:07:00.108763 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8316d30b-f132-4ca2-a04b-4276c8d6a2b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kqtdh" podUID="8316d30b-f132-4ca2-a04b-4276c8d6a2b0" Oct 9 01:07:00.111201 containerd[1635]: time="2024-10-09T01:07:00.111171125Z" level=error msg="StopPodSandbox for \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\" failed" error="failed to destroy network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 9 01:07:00.111416 kubelet[2975]: E1009 01:07:00.111336 2975 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:00.111416 kubelet[2975]: E1009 01:07:00.111356 2975 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa"} Oct 9 
01:07:00.111416 kubelet[2975]: E1009 01:07:00.111382 2975 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59b2fdb2-7c87-464c-9731-460e7a5b18c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 9 01:07:00.111416 kubelet[2975]: E1009 01:07:00.111403 2975 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59b2fdb2-7c87-464c-9731-460e7a5b18c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d677fb677-mcqqn" podUID="59b2fdb2-7c87-464c-9731-460e7a5b18c0" Oct 9 01:07:04.865533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1101552094.mount: Deactivated successfully. 
Oct 9 01:07:04.896168 containerd[1635]: time="2024-10-09T01:07:04.896094349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:04.896876 containerd[1635]: time="2024-10-09T01:07:04.896746371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=117873564" Oct 9 01:07:04.898716 containerd[1635]: time="2024-10-09T01:07:04.898675430Z" level=info msg="ImageCreate event name:\"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:04.909254 containerd[1635]: time="2024-10-09T01:07:04.909194613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:04.909620 containerd[1635]: time="2024-10-09T01:07:04.909592540Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"117873426\" in 5.865968021s" Oct 9 01:07:04.910010 containerd[1635]: time="2024-10-09T01:07:04.909620011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:8bbeb9e1ee3287b8f750c10383f53fa1ec6f942aaea2a900f666d5e4e63cf4cc\"" Oct 9 01:07:04.988722 containerd[1635]: time="2024-10-09T01:07:04.988679487Z" level=info msg="CreateContainer within sandbox \"690da6c50f3e9e0b65b4822edfc706455d199985efffea038c88339f5f5973f1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 9 01:07:05.013136 containerd[1635]: time="2024-10-09T01:07:05.013104295Z" level=info msg="CreateContainer 
within sandbox \"690da6c50f3e9e0b65b4822edfc706455d199985efffea038c88339f5f5973f1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b69179aae27e235744cf144d7a7406bd37361716d277d0c729c491bdb162ccaf\"" Oct 9 01:07:05.014233 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416235707.mount: Deactivated successfully. Oct 9 01:07:05.015090 containerd[1635]: time="2024-10-09T01:07:05.015053744Z" level=info msg="StartContainer for \"b69179aae27e235744cf144d7a7406bd37361716d277d0c729c491bdb162ccaf\"" Oct 9 01:07:05.112230 containerd[1635]: time="2024-10-09T01:07:05.112156279Z" level=info msg="StartContainer for \"b69179aae27e235744cf144d7a7406bd37361716d277d0c729c491bdb162ccaf\" returns successfully" Oct 9 01:07:05.209938 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 9 01:07:05.214332 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Oct 9 01:07:06.120364 kubelet[2975]: I1009 01:07:06.119944 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-ksljv" podStartSLOduration=2.648688128 podStartE2EDuration="18.100866709s" podCreationTimestamp="2024-10-09 01:06:48 +0000 UTC" firstStartedPulling="2024-10-09 01:06:49.457636656 +0000 UTC m=+20.665881263" lastFinishedPulling="2024-10-09 01:07:04.909815237 +0000 UTC m=+36.118059844" observedRunningTime="2024-10-09 01:07:06.097017336 +0000 UTC m=+37.305261974" watchObservedRunningTime="2024-10-09 01:07:06.100866709 +0000 UTC m=+37.309111346" Oct 9 01:07:06.637256 systemd-journald[1184]: Under memory pressure, flushing caches. Oct 9 01:07:06.636059 systemd-resolved[1512]: Under memory pressure, flushing caches. Oct 9 01:07:06.636126 systemd-resolved[1512]: Flushed all caches. 
Oct 9 01:07:06.844759 kernel: bpftool[4056]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 9 01:07:07.077066 systemd-networkd[1262]: vxlan.calico: Link UP Oct 9 01:07:07.077072 systemd-networkd[1262]: vxlan.calico: Gained carrier Oct 9 01:07:08.235165 systemd-networkd[1262]: vxlan.calico: Gained IPv6LL Oct 9 01:07:08.683333 systemd-resolved[1512]: Under memory pressure, flushing caches. Oct 9 01:07:08.685182 systemd-journald[1184]: Under memory pressure, flushing caches. Oct 9 01:07:08.683341 systemd-resolved[1512]: Flushed all caches. Oct 9 01:07:11.912870 containerd[1635]: time="2024-10-09T01:07:11.912791771Z" level=info msg="StopPodSandbox for \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\"" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:11.966 [INFO][4183] k8s.go 608: Cleaning up netns ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:11.968 [INFO][4183] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" iface="eth0" netns="/var/run/netns/cni-5705cd12-364e-d107-6594-b70e3c350e87" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:11.968 [INFO][4183] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" iface="eth0" netns="/var/run/netns/cni-5705cd12-364e-d107-6594-b70e3c350e87" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:11.969 [INFO][4183] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" iface="eth0" netns="/var/run/netns/cni-5705cd12-364e-d107-6594-b70e3c350e87" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:11.969 [INFO][4183] k8s.go 615: Releasing IP address(es) ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:11.969 [INFO][4183] utils.go 188: Calico CNI releasing IP address ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:12.150 [INFO][4189] ipam_plugin.go 417: Releasing address using handleID ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:12.151 [INFO][4189] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:12.152 [INFO][4189] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:12.162 [WARNING][4189] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:12.162 [INFO][4189] ipam_plugin.go 445: Releasing address using workloadID ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:12.164 [INFO][4189] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:12.170858 containerd[1635]: 2024-10-09 01:07:12.166 [INFO][4183] k8s.go 621: Teardown processing complete. ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:12.170858 containerd[1635]: time="2024-10-09T01:07:12.169689811Z" level=info msg="TearDown network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\" successfully" Oct 9 01:07:12.170858 containerd[1635]: time="2024-10-09T01:07:12.169715191Z" level=info msg="StopPodSandbox for \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\" returns successfully" Oct 9 01:07:12.173784 containerd[1635]: time="2024-10-09T01:07:12.173367462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d677fb677-mcqqn,Uid:59b2fdb2-7c87-464c-9731-460e7a5b18c0,Namespace:calico-system,Attempt:1,}" Oct 9 01:07:12.176809 systemd[1]: run-netns-cni\x2d5705cd12\x2d364e\x2dd107\x2d6594\x2db70e3c350e87.mount: Deactivated successfully. 
Oct 9 01:07:12.325869 systemd-networkd[1262]: cali8a5dbc77d26: Link UP Oct 9 01:07:12.327076 systemd-networkd[1262]: cali8a5dbc77d26: Gained carrier Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.256 [INFO][4195] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0 calico-kube-controllers-7d677fb677- calico-system 59b2fdb2-7c87-464c-9731-460e7a5b18c0 673 0 2024-10-09 01:06:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d677fb677 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4116-0-0-f-4ef11beaf3 calico-kube-controllers-7d677fb677-mcqqn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8a5dbc77d26 [] []}} ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.256 [INFO][4195] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.283 [INFO][4206] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" HandleID="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" 
Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.292 [INFO][4206] ipam_plugin.go 270: Auto assigning IP ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" HandleID="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000344410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-f-4ef11beaf3", "pod":"calico-kube-controllers-7d677fb677-mcqqn", "timestamp":"2024-10-09 01:07:12.28389111 +0000 UTC"}, Hostname:"ci-4116-0-0-f-4ef11beaf3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.292 [INFO][4206] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.292 [INFO][4206] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.292 [INFO][4206] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-f-4ef11beaf3' Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.294 [INFO][4206] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.300 [INFO][4206] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.304 [INFO][4206] ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.305 [INFO][4206] ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.307 [INFO][4206] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.307 [INFO][4206] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.308 [INFO][4206] ipam.go 1685: Creating new handle: k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.313 [INFO][4206] ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.319 [INFO][4206] ipam.go 1216: Successfully claimed IPs: [192.168.75.1/26] block=192.168.75.0/26 
handle="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.319 [INFO][4206] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.1/26] handle="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.319 [INFO][4206] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:12.344842 containerd[1635]: 2024-10-09 01:07:12.319 [INFO][4206] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.75.1/26] IPv6=[] ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" HandleID="k8s-pod-network.8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.349415 containerd[1635]: 2024-10-09 01:07:12.323 [INFO][4195] k8s.go 386: Populated endpoint ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0", GenerateName:"calico-kube-controllers-7d677fb677-", Namespace:"calico-system", SelfLink:"", UID:"59b2fdb2-7c87-464c-9731-460e7a5b18c0", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d677fb677", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"", Pod:"calico-kube-controllers-7d677fb677-mcqqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a5dbc77d26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:12.349415 containerd[1635]: 2024-10-09 01:07:12.323 [INFO][4195] k8s.go 387: Calico CNI using IPs: [192.168.75.1/32] ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.349415 containerd[1635]: 2024-10-09 01:07:12.323 [INFO][4195] dataplane_linux.go 68: Setting the host side veth name to cali8a5dbc77d26 ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.349415 containerd[1635]: 2024-10-09 01:07:12.328 [INFO][4195] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 
9 01:07:12.349415 containerd[1635]: 2024-10-09 01:07:12.328 [INFO][4195] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0", GenerateName:"calico-kube-controllers-7d677fb677-", Namespace:"calico-system", SelfLink:"", UID:"59b2fdb2-7c87-464c-9731-460e7a5b18c0", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d677fb677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a", Pod:"calico-kube-controllers-7d677fb677-mcqqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a5dbc77d26", MAC:"aa:91:46:62:78:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} 
Oct 9 01:07:12.349415 containerd[1635]: 2024-10-09 01:07:12.339 [INFO][4195] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a" Namespace="calico-system" Pod="calico-kube-controllers-7d677fb677-mcqqn" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:12.377414 containerd[1635]: time="2024-10-09T01:07:12.377066211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:12.377414 containerd[1635]: time="2024-10-09T01:07:12.377127606Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:12.377414 containerd[1635]: time="2024-10-09T01:07:12.377144918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:12.378800 containerd[1635]: time="2024-10-09T01:07:12.378744047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:12.449661 containerd[1635]: time="2024-10-09T01:07:12.449430849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d677fb677-mcqqn,Uid:59b2fdb2-7c87-464c-9731-460e7a5b18c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a\"" Oct 9 01:07:12.451480 containerd[1635]: time="2024-10-09T01:07:12.451377132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 9 01:07:12.915961 containerd[1635]: time="2024-10-09T01:07:12.915447876Z" level=info msg="StopPodSandbox for \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\"" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.000 [INFO][4289] k8s.go 608: Cleaning up netns ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.000 [INFO][4289] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" iface="eth0" netns="/var/run/netns/cni-a062f3d8-ff3e-3272-9f2e-0a59a928d83e" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.000 [INFO][4289] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" iface="eth0" netns="/var/run/netns/cni-a062f3d8-ff3e-3272-9f2e-0a59a928d83e" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.000 [INFO][4289] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" iface="eth0" netns="/var/run/netns/cni-a062f3d8-ff3e-3272-9f2e-0a59a928d83e" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.000 [INFO][4289] k8s.go 615: Releasing IP address(es) ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.000 [INFO][4289] utils.go 188: Calico CNI releasing IP address ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.021 [INFO][4296] ipam_plugin.go 417: Releasing address using handleID ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.021 [INFO][4296] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.021 [INFO][4296] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.025 [WARNING][4296] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.025 [INFO][4296] ipam_plugin.go 445: Releasing address using workloadID ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.026 [INFO][4296] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:13.030968 containerd[1635]: 2024-10-09 01:07:13.028 [INFO][4289] k8s.go 621: Teardown processing complete. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:13.031363 containerd[1635]: time="2024-10-09T01:07:13.031225185Z" level=info msg="TearDown network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\" successfully" Oct 9 01:07:13.031363 containerd[1635]: time="2024-10-09T01:07:13.031267616Z" level=info msg="StopPodSandbox for \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\" returns successfully" Oct 9 01:07:13.031931 containerd[1635]: time="2024-10-09T01:07:13.031902925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4ftf,Uid:f443b1d9-bed1-4b25-880d-a05ee8cfe5b8,Namespace:kube-system,Attempt:1,}" Oct 9 01:07:13.128089 systemd-networkd[1262]: cali88bcd0c5e2e: Link UP Oct 9 01:07:13.128271 systemd-networkd[1262]: cali88bcd0c5e2e: Gained carrier Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.066 [INFO][4303] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0 
coredns-76f75df574- kube-system f443b1d9-bed1-4b25-880d-a05ee8cfe5b8 681 0 2024-10-09 01:06:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-f-4ef11beaf3 coredns-76f75df574-m4ftf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali88bcd0c5e2e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.066 [INFO][4303] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.090 [INFO][4313] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" HandleID="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.099 [INFO][4313] ipam_plugin.go 270: Auto assigning IP ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" HandleID="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edca0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-f-4ef11beaf3", "pod":"coredns-76f75df574-m4ftf", 
"timestamp":"2024-10-09 01:07:13.090485207 +0000 UTC"}, Hostname:"ci-4116-0-0-f-4ef11beaf3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.100 [INFO][4313] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.100 [INFO][4313] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.100 [INFO][4313] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-f-4ef11beaf3' Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.101 [INFO][4313] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.105 [INFO][4313] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.108 [INFO][4313] ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.109 [INFO][4313] ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.111 [INFO][4313] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.111 [INFO][4313] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.113 
[INFO][4313] ipam.go 1685: Creating new handle: k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928 Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.117 [INFO][4313] ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.121 [INFO][4313] ipam.go 1216: Successfully claimed IPs: [192.168.75.2/26] block=192.168.75.0/26 handle="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.121 [INFO][4313] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.2/26] handle="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.121 [INFO][4313] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 9 01:07:13.144226 containerd[1635]: 2024-10-09 01:07:13.121 [INFO][4313] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.75.2/26] IPv6=[] ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" HandleID="k8s-pod-network.9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.145749 containerd[1635]: 2024-10-09 01:07:13.123 [INFO][4303] k8s.go 386: Populated endpoint ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"", Pod:"coredns-76f75df574-m4ftf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88bcd0c5e2e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:13.145749 containerd[1635]: 2024-10-09 01:07:13.124 [INFO][4303] k8s.go 387: Calico CNI using IPs: [192.168.75.2/32] ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.145749 containerd[1635]: 2024-10-09 01:07:13.124 [INFO][4303] dataplane_linux.go 68: Setting the host side veth name to cali88bcd0c5e2e ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.145749 containerd[1635]: 2024-10-09 01:07:13.125 [INFO][4303] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.145749 containerd[1635]: 2024-10-09 01:07:13.126 [INFO][4303] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8", ResourceVersion:"681", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928", Pod:"coredns-76f75df574-m4ftf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88bcd0c5e2e", MAC:"f6:a1:e0:24:f5:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:13.145749 containerd[1635]: 2024-10-09 01:07:13.141 [INFO][4303] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928" Namespace="kube-system" Pod="coredns-76f75df574-m4ftf" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:13.165813 containerd[1635]: time="2024-10-09T01:07:13.165289520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:13.166126 containerd[1635]: time="2024-10-09T01:07:13.165795165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:13.166126 containerd[1635]: time="2024-10-09T01:07:13.165953153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:13.166270 containerd[1635]: time="2024-10-09T01:07:13.166065215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:13.176525 systemd[1]: run-netns-cni\x2da062f3d8\x2dff3e\x2d3272\x2d9f2e\x2d0a59a928d83e.mount: Deactivated successfully. Oct 9 01:07:13.221260 containerd[1635]: time="2024-10-09T01:07:13.221178365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-m4ftf,Uid:f443b1d9-bed1-4b25-880d-a05ee8cfe5b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928\"" Oct 9 01:07:13.225109 containerd[1635]: time="2024-10-09T01:07:13.225080684Z" level=info msg="CreateContainer within sandbox \"9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 9 01:07:13.243185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820013649.mount: Deactivated successfully. 
Oct 9 01:07:13.244428 containerd[1635]: time="2024-10-09T01:07:13.244391127Z" level=info msg="CreateContainer within sandbox \"9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42db22e90ed5bc5deb56da5f6a91f75fc12d9c811af67bf0a017f0d7a4211527\"" Oct 9 01:07:13.244897 containerd[1635]: time="2024-10-09T01:07:13.244872938Z" level=info msg="StartContainer for \"42db22e90ed5bc5deb56da5f6a91f75fc12d9c811af67bf0a017f0d7a4211527\"" Oct 9 01:07:13.306615 containerd[1635]: time="2024-10-09T01:07:13.306527991Z" level=info msg="StartContainer for \"42db22e90ed5bc5deb56da5f6a91f75fc12d9c811af67bf0a017f0d7a4211527\" returns successfully" Oct 9 01:07:13.419245 systemd-networkd[1262]: cali8a5dbc77d26: Gained IPv6LL Oct 9 01:07:14.161251 kubelet[2975]: I1009 01:07:14.161214 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-m4ftf" podStartSLOduration=31.161178379 podStartE2EDuration="31.161178379s" podCreationTimestamp="2024-10-09 01:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:14.144732555 +0000 UTC m=+45.352977172" watchObservedRunningTime="2024-10-09 01:07:14.161178379 +0000 UTC m=+45.369422985" Oct 9 01:07:14.507138 systemd-networkd[1262]: cali88bcd0c5e2e: Gained IPv6LL Oct 9 01:07:14.913963 containerd[1635]: time="2024-10-09T01:07:14.913719998Z" level=info msg="StopPodSandbox for \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\"" Oct 9 01:07:14.915094 containerd[1635]: time="2024-10-09T01:07:14.914726660Z" level=info msg="StopPodSandbox for \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\"" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:14.969 [INFO][4451] k8s.go 608: Cleaning up netns ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 
01:07:15.035147 containerd[1635]: 2024-10-09 01:07:14.969 [INFO][4451] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" iface="eth0" netns="/var/run/netns/cni-cfa6e89e-098c-a5c8-f92e-0a9aed6a6b60" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:14.971 [INFO][4451] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" iface="eth0" netns="/var/run/netns/cni-cfa6e89e-098c-a5c8-f92e-0a9aed6a6b60" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:14.971 [INFO][4451] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" iface="eth0" netns="/var/run/netns/cni-cfa6e89e-098c-a5c8-f92e-0a9aed6a6b60" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:14.971 [INFO][4451] k8s.go 615: Releasing IP address(es) ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:14.971 [INFO][4451] utils.go 188: Calico CNI releasing IP address ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:15.018 [INFO][4464] ipam_plugin.go 417: Releasing address using handleID ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:15.019 [INFO][4464] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:15.019 [INFO][4464] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:15.024 [WARNING][4464] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:15.024 [INFO][4464] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:15.026 [INFO][4464] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:15.035147 containerd[1635]: 2024-10-09 01:07:15.031 [INFO][4451] k8s.go 621: Teardown processing complete. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:15.037394 containerd[1635]: time="2024-10-09T01:07:15.037369813Z" level=info msg="TearDown network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\" successfully" Oct 9 01:07:15.037465 containerd[1635]: time="2024-10-09T01:07:15.037452179Z" level=info msg="StopPodSandbox for \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\" returns successfully" Oct 9 01:07:15.040806 containerd[1635]: time="2024-10-09T01:07:15.039553091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqtdh,Uid:8316d30b-f132-4ca2-a04b-4276c8d6a2b0,Namespace:calico-system,Attempt:1,}" Oct 9 01:07:15.043248 systemd[1]: run-netns-cni\x2dcfa6e89e\x2d098c\x2da5c8\x2df92e\x2d0a9aed6a6b60.mount: Deactivated successfully. 
Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:14.991 [INFO][4455] k8s.go 608: Cleaning up netns ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:14.991 [INFO][4455] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" iface="eth0" netns="/var/run/netns/cni-fed5f332-613f-34f4-5d81-0c95738aebae" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:14.991 [INFO][4455] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" iface="eth0" netns="/var/run/netns/cni-fed5f332-613f-34f4-5d81-0c95738aebae" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:14.991 [INFO][4455] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" iface="eth0" netns="/var/run/netns/cni-fed5f332-613f-34f4-5d81-0c95738aebae" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:14.991 [INFO][4455] k8s.go 615: Releasing IP address(es) ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:14.992 [INFO][4455] utils.go 188: Calico CNI releasing IP address ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:15.032 [INFO][4468] ipam_plugin.go 417: Releasing address using handleID ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:15.033 [INFO][4468] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:15.033 [INFO][4468] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:15.037 [WARNING][4468] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:15.038 [INFO][4468] ipam_plugin.go 445: Releasing address using workloadID ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:15.039 [INFO][4468] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:15.049605 containerd[1635]: 2024-10-09 01:07:15.044 [INFO][4455] k8s.go 621: Teardown processing complete. 
ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:15.050866 containerd[1635]: time="2024-10-09T01:07:15.050684175Z" level=info msg="TearDown network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\" successfully" Oct 9 01:07:15.050866 containerd[1635]: time="2024-10-09T01:07:15.050704803Z" level=info msg="StopPodSandbox for \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\" returns successfully" Oct 9 01:07:15.052688 containerd[1635]: time="2024-10-09T01:07:15.052607471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6zdq4,Uid:c783c086-b681-4894-9dd2-660195d788ef,Namespace:kube-system,Attempt:1,}" Oct 9 01:07:15.054140 systemd[1]: run-netns-cni\x2dfed5f332\x2d613f\x2d34f4\x2d5d81\x2d0c95738aebae.mount: Deactivated successfully. Oct 9 01:07:15.217564 systemd-networkd[1262]: calid0bb8c34947: Link UP Oct 9 01:07:15.220024 systemd-networkd[1262]: calid0bb8c34947: Gained carrier Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.104 [INFO][4477] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0 csi-node-driver- calico-system 8316d30b-f132-4ca2-a04b-4276c8d6a2b0 701 0 2024-10-09 01:06:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4116-0-0-f-4ef11beaf3 csi-node-driver-kqtdh eth0 default [] [] [kns.calico-system ksa.calico-system.default] calid0bb8c34947 [] []}} ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Namespace="calico-system" Pod="csi-node-driver-kqtdh" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-" Oct 9 
01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.104 [INFO][4477] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Namespace="calico-system" Pod="csi-node-driver-kqtdh" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.163 [INFO][4502] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" HandleID="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.174 [INFO][4502] ipam_plugin.go 270: Auto assigning IP ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" HandleID="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4116-0-0-f-4ef11beaf3", "pod":"csi-node-driver-kqtdh", "timestamp":"2024-10-09 01:07:15.163856992 +0000 UTC"}, Hostname:"ci-4116-0-0-f-4ef11beaf3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.174 [INFO][4502] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.174 [INFO][4502] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.175 [INFO][4502] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-f-4ef11beaf3' Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.176 [INFO][4502] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.181 [INFO][4502] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.185 [INFO][4502] ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.187 [INFO][4502] ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.189 [INFO][4502] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.189 [INFO][4502] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.191 [INFO][4502] ipam.go 1685: Creating new handle: k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.195 [INFO][4502] ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.203 [INFO][4502] ipam.go 1216: Successfully claimed IPs: [192.168.75.3/26] block=192.168.75.0/26 
handle="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.206 [INFO][4502] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.3/26] handle="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.206 [INFO][4502] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:15.247001 containerd[1635]: 2024-10-09 01:07:15.206 [INFO][4502] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.75.3/26] IPv6=[] ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" HandleID="k8s-pod-network.ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.247551 containerd[1635]: 2024-10-09 01:07:15.210 [INFO][4477] k8s.go 386: Populated endpoint ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Namespace="calico-system" Pod="csi-node-driver-kqtdh" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8316d30b-f132-4ca2-a04b-4276c8d6a2b0", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"", Pod:"csi-node-driver-kqtdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid0bb8c34947", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:15.247551 containerd[1635]: 2024-10-09 01:07:15.211 [INFO][4477] k8s.go 387: Calico CNI using IPs: [192.168.75.3/32] ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Namespace="calico-system" Pod="csi-node-driver-kqtdh" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.247551 containerd[1635]: 2024-10-09 01:07:15.211 [INFO][4477] dataplane_linux.go 68: Setting the host side veth name to calid0bb8c34947 ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Namespace="calico-system" Pod="csi-node-driver-kqtdh" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.247551 containerd[1635]: 2024-10-09 01:07:15.221 [INFO][4477] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Namespace="calico-system" Pod="csi-node-driver-kqtdh" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.247551 containerd[1635]: 2024-10-09 01:07:15.223 [INFO][4477] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" 
Namespace="calico-system" Pod="csi-node-driver-kqtdh" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8316d30b-f132-4ca2-a04b-4276c8d6a2b0", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e", Pod:"csi-node-driver-kqtdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid0bb8c34947", MAC:"96:d1:38:e7:1c:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:15.247551 containerd[1635]: 2024-10-09 01:07:15.239 [INFO][4477] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e" Namespace="calico-system" Pod="csi-node-driver-kqtdh" 
WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:15.286019 systemd-networkd[1262]: calie4508ece0bd: Link UP Oct 9 01:07:15.286226 systemd-networkd[1262]: calie4508ece0bd: Gained carrier Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.130 [INFO][4486] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0 coredns-76f75df574- kube-system c783c086-b681-4894-9dd2-660195d788ef 702 0 2024-10-09 01:06:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4116-0-0-f-4ef11beaf3 coredns-76f75df574-6zdq4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie4508ece0bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-" Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.130 [INFO][4486] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.199 [INFO][4506] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" HandleID="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.209 [INFO][4506] ipam_plugin.go 270: 
Auto assigning IP ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" HandleID="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318a30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4116-0-0-f-4ef11beaf3", "pod":"coredns-76f75df574-6zdq4", "timestamp":"2024-10-09 01:07:15.199076358 +0000 UTC"}, Hostname:"ci-4116-0-0-f-4ef11beaf3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.209 [INFO][4506] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.209 [INFO][4506] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.209 [INFO][4506] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-f-4ef11beaf3'
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.212 [INFO][4506] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.217 [INFO][4506] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.226 [INFO][4506] ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.229 [INFO][4506] ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.240 [INFO][4506] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.240 [INFO][4506] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.252 [INFO][4506] ipam.go 1685: Creating new handle: k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.259 [INFO][4506] ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.272 [INFO][4506] ipam.go 1216: Successfully claimed IPs: [192.168.75.4/26] block=192.168.75.0/26 handle="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.272 [INFO][4506] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.4/26] handle="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.273 [INFO][4506] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:07:15.311768 containerd[1635]: 2024-10-09 01:07:15.273 [INFO][4506] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.75.4/26] IPv6=[] ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" HandleID="k8s-pod-network.4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:15.312381 containerd[1635]: 2024-10-09 01:07:15.280 [INFO][4486] k8s.go 386: Populated endpoint ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c783c086-b681-4894-9dd2-660195d788ef", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"", Pod:"coredns-76f75df574-6zdq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4508ece0bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:07:15.312381 containerd[1635]: 2024-10-09 01:07:15.281 [INFO][4486] k8s.go 387: Calico CNI using IPs: [192.168.75.4/32] ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:15.312381 containerd[1635]: 2024-10-09 01:07:15.281 [INFO][4486] dataplane_linux.go 68: Setting the host side veth name to calie4508ece0bd ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:15.312381 containerd[1635]: 2024-10-09 01:07:15.284 [INFO][4486] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:15.312381 containerd[1635]: 2024-10-09 01:07:15.285 [INFO][4486] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c783c086-b681-4894-9dd2-660195d788ef", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769", Pod:"coredns-76f75df574-6zdq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4508ece0bd", MAC:"8a:9f:55:f6:7d:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:07:15.312381 containerd[1635]: 2024-10-09 01:07:15.301 [INFO][4486] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769" Namespace="kube-system" Pod="coredns-76f75df574-6zdq4" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:15.321947 containerd[1635]: time="2024-10-09T01:07:15.321860586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:07:15.325460 containerd[1635]: time="2024-10-09T01:07:15.325037932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:07:15.325460 containerd[1635]: time="2024-10-09T01:07:15.325053342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:15.325460 containerd[1635]: time="2024-10-09T01:07:15.325248110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:15.360210 containerd[1635]: time="2024-10-09T01:07:15.359115119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 9 01:07:15.360210 containerd[1635]: time="2024-10-09T01:07:15.359188978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 9 01:07:15.360210 containerd[1635]: time="2024-10-09T01:07:15.359198357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:15.360210 containerd[1635]: time="2024-10-09T01:07:15.359721495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 9 01:07:15.420652 containerd[1635]: time="2024-10-09T01:07:15.420601426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kqtdh,Uid:8316d30b-f132-4ca2-a04b-4276c8d6a2b0,Namespace:calico-system,Attempt:1,} returns sandbox id \"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e\""
Oct 9 01:07:15.444491 containerd[1635]: time="2024-10-09T01:07:15.444456316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6zdq4,Uid:c783c086-b681-4894-9dd2-660195d788ef,Namespace:kube-system,Attempt:1,} returns sandbox id \"4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769\""
Oct 9 01:07:15.449941 containerd[1635]: time="2024-10-09T01:07:15.449915257Z" level=info msg="CreateContainer within sandbox \"4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 9 01:07:15.465202 containerd[1635]: time="2024-10-09T01:07:15.465169116Z" level=info msg="CreateContainer within sandbox \"4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b74024ae921f56d1dade12b8ab132a5e05428c1f7526d763704bb0eab5f8de13\""
Oct 9 01:07:15.465816 containerd[1635]: time="2024-10-09T01:07:15.465722882Z" level=info msg="StartContainer for \"b74024ae921f56d1dade12b8ab132a5e05428c1f7526d763704bb0eab5f8de13\""
Oct 9 01:07:15.466286 containerd[1635]: time="2024-10-09T01:07:15.466234289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:15.468291 containerd[1635]: time="2024-10-09T01:07:15.468111359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=33507125"
Oct 9 01:07:15.469364 containerd[1635]: time="2024-10-09T01:07:15.469240774Z" level=info msg="ImageCreate event name:\"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:15.473674 containerd[1635]: time="2024-10-09T01:07:15.473645592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:15.474402 containerd[1635]: time="2024-10-09T01:07:15.474271445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"34999494\" in 3.022832927s"
Oct 9 01:07:15.474402 containerd[1635]: time="2024-10-09T01:07:15.474296732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:9d19dff735fa0889ad6e741790dd1ff35dc4443f14c95bd61459ff0b9162252e\""
Oct 9 01:07:15.475128 containerd[1635]: time="2024-10-09T01:07:15.475109148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\""
Oct 9 01:07:15.484273 containerd[1635]: time="2024-10-09T01:07:15.484142066Z" level=info msg="CreateContainer within sandbox \"8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Oct 9 01:07:15.496787 containerd[1635]: time="2024-10-09T01:07:15.496708895Z" level=info msg="CreateContainer within sandbox \"8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"dbfbaf265f6a8afcaefd13cff0f8278c487ff9ac711203a1c2ca412596cb9969\""
Oct 9 01:07:15.498560 containerd[1635]: time="2024-10-09T01:07:15.498406314Z" level=info msg="StartContainer for \"dbfbaf265f6a8afcaefd13cff0f8278c487ff9ac711203a1c2ca412596cb9969\""
Oct 9 01:07:15.535392 containerd[1635]: time="2024-10-09T01:07:15.535322309Z" level=info msg="StartContainer for \"b74024ae921f56d1dade12b8ab132a5e05428c1f7526d763704bb0eab5f8de13\" returns successfully"
Oct 9 01:07:15.608266 containerd[1635]: time="2024-10-09T01:07:15.608171839Z" level=info msg="StartContainer for \"dbfbaf265f6a8afcaefd13cff0f8278c487ff9ac711203a1c2ca412596cb9969\" returns successfully"
Oct 9 01:07:16.156616 kubelet[2975]: I1009 01:07:16.156582 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d677fb677-mcqqn" podStartSLOduration=24.132912215 podStartE2EDuration="27.156546956s" podCreationTimestamp="2024-10-09 01:06:49 +0000 UTC" firstStartedPulling="2024-10-09 01:07:12.450869524 +0000 UTC m=+43.659114131" lastFinishedPulling="2024-10-09 01:07:15.474504265 +0000 UTC m=+46.682748872" observedRunningTime="2024-10-09 01:07:16.144381161 +0000 UTC m=+47.352625768" watchObservedRunningTime="2024-10-09 01:07:16.156546956 +0000 UTC m=+47.364791564"
Oct 9 01:07:16.209860 kubelet[2975]: I1009 01:07:16.209447 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6zdq4" podStartSLOduration=33.209408956 podStartE2EDuration="33.209408956s" podCreationTimestamp="2024-10-09 01:06:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-09 01:07:16.157214168 +0000 UTC m=+47.365458785" watchObservedRunningTime="2024-10-09 01:07:16.209408956 +0000 UTC m=+47.417653563"
Oct 9 01:07:16.939162 systemd-networkd[1262]: calie4508ece0bd: Gained IPv6LL
Oct 9 01:07:17.132103 systemd-networkd[1262]: calid0bb8c34947: Gained IPv6LL
Oct 9 01:07:17.146497 containerd[1635]: time="2024-10-09T01:07:17.146429186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:17.147646 containerd[1635]: time="2024-10-09T01:07:17.147588470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7642081"
Oct 9 01:07:17.148814 containerd[1635]: time="2024-10-09T01:07:17.148768643Z" level=info msg="ImageCreate event name:\"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:17.150880 containerd[1635]: time="2024-10-09T01:07:17.150830066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:17.151605 containerd[1635]: time="2024-10-09T01:07:17.151427096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"9134482\" in 1.675127366s"
Oct 9 01:07:17.151605 containerd[1635]: time="2024-10-09T01:07:17.151461110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:d0c7782dfd1af19483b1da01b3d6692a92c2a570a3c8c6059128fda84c838a61\""
Oct 9 01:07:17.154487 containerd[1635]: time="2024-10-09T01:07:17.154454296Z" level=info msg="CreateContainer within sandbox \"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Oct 9 01:07:17.185632 containerd[1635]: time="2024-10-09T01:07:17.185583476Z" level=info msg="CreateContainer within sandbox \"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dd7f3a20d4f7d01d3fcb41be4fff03c98c8d64d096cebb42a1dc2df0857b0183\""
Oct 9 01:07:17.186375 containerd[1635]: time="2024-10-09T01:07:17.186351410Z" level=info msg="StartContainer for \"dd7f3a20d4f7d01d3fcb41be4fff03c98c8d64d096cebb42a1dc2df0857b0183\""
Oct 9 01:07:17.227720 systemd[1]: run-containerd-runc-k8s.io-dd7f3a20d4f7d01d3fcb41be4fff03c98c8d64d096cebb42a1dc2df0857b0183-runc.BYup7m.mount: Deactivated successfully.
Oct 9 01:07:17.272065 containerd[1635]: time="2024-10-09T01:07:17.272032576Z" level=info msg="StartContainer for \"dd7f3a20d4f7d01d3fcb41be4fff03c98c8d64d096cebb42a1dc2df0857b0183\" returns successfully"
Oct 9 01:07:17.274246 containerd[1635]: time="2024-10-09T01:07:17.274224594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\""
Oct 9 01:07:19.188893 containerd[1635]: time="2024-10-09T01:07:19.188808208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:19.190574 containerd[1635]: time="2024-10-09T01:07:19.190541754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12907822"
Oct 9 01:07:19.192119 containerd[1635]: time="2024-10-09T01:07:19.191526529Z" level=info msg="ImageCreate event name:\"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:19.194447 containerd[1635]: time="2024-10-09T01:07:19.194413790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 9 01:07:19.195999 containerd[1635]: time="2024-10-09T01:07:19.195938439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"14400175\" in 1.921669743s"
Oct 9 01:07:19.196281 containerd[1635]: time="2024-10-09T01:07:19.196264336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:d1ca8f023879d2e9a9a7c98dbb3252886c5b7676be9529ddb5200aa2789b233e\""
Oct 9 01:07:19.200111 containerd[1635]: time="2024-10-09T01:07:19.200070529Z" level=info msg="CreateContainer within sandbox \"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Oct 9 01:07:19.226934 containerd[1635]: time="2024-10-09T01:07:19.226893203Z" level=info msg="CreateContainer within sandbox \"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8069584a49c3e282c10c362c2574396394c0fb56441ce81fcd885a98c9e3f443\""
Oct 9 01:07:19.228803 containerd[1635]: time="2024-10-09T01:07:19.227506024Z" level=info msg="StartContainer for \"8069584a49c3e282c10c362c2574396394c0fb56441ce81fcd885a98c9e3f443\""
Oct 9 01:07:19.315018 containerd[1635]: time="2024-10-09T01:07:19.314986724Z" level=info msg="StartContainer for \"8069584a49c3e282c10c362c2574396394c0fb56441ce81fcd885a98c9e3f443\" returns successfully"
Oct 9 01:07:20.091440 kubelet[2975]: I1009 01:07:20.091369 2975 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Oct 9 01:07:20.093707 kubelet[2975]: I1009 01:07:20.093656 2975 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Oct 9 01:07:20.154933 kubelet[2975]: I1009 01:07:20.154887 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-kqtdh" podStartSLOduration=27.380109843 podStartE2EDuration="31.154853326s" podCreationTimestamp="2024-10-09 01:06:49 +0000 UTC" firstStartedPulling="2024-10-09 01:07:15.421848804 +0000 UTC m=+46.630093412" lastFinishedPulling="2024-10-09 01:07:19.196592288 +0000 UTC m=+50.404836895" observedRunningTime="2024-10-09 01:07:20.154266003 +0000 UTC m=+51.362510610" watchObservedRunningTime="2024-10-09 01:07:20.154853326 +0000 UTC m=+51.363097934"
Oct 9 01:07:22.767855 systemd[1]: run-containerd-runc-k8s.io-b69179aae27e235744cf144d7a7406bd37361716d277d0c729c491bdb162ccaf-runc.rA4IcH.mount: Deactivated successfully.
Oct 9 01:07:23.998937 kubelet[2975]: I1009 01:07:23.998872 2975 topology_manager.go:215] "Topology Admit Handler" podUID="120d0dbb-1ee3-4c81-8fb8-9af9a0405a63" podNamespace="calico-apiserver" podName="calico-apiserver-5698f4bb89-kfm2c"
Oct 9 01:07:24.001202 kubelet[2975]: I1009 01:07:23.999201 2975 topology_manager.go:215] "Topology Admit Handler" podUID="e5337afd-14f2-4837-8cc8-e85ef6102323" podNamespace="calico-apiserver" podName="calico-apiserver-5698f4bb89-xl2z6"
Oct 9 01:07:24.112955 kubelet[2975]: I1009 01:07:24.112917 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5337afd-14f2-4837-8cc8-e85ef6102323-calico-apiserver-certs\") pod \"calico-apiserver-5698f4bb89-xl2z6\" (UID: \"e5337afd-14f2-4837-8cc8-e85ef6102323\") " pod="calico-apiserver/calico-apiserver-5698f4bb89-xl2z6"
Oct 9 01:07:24.122393 kubelet[2975]: I1009 01:07:24.122313 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz5xd\" (UniqueName: \"kubernetes.io/projected/120d0dbb-1ee3-4c81-8fb8-9af9a0405a63-kube-api-access-lz5xd\") pod \"calico-apiserver-5698f4bb89-kfm2c\" (UID: \"120d0dbb-1ee3-4c81-8fb8-9af9a0405a63\") " pod="calico-apiserver/calico-apiserver-5698f4bb89-kfm2c"
Oct 9 01:07:24.122393 kubelet[2975]: I1009 01:07:24.122357 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/120d0dbb-1ee3-4c81-8fb8-9af9a0405a63-calico-apiserver-certs\") pod \"calico-apiserver-5698f4bb89-kfm2c\" (UID: \"120d0dbb-1ee3-4c81-8fb8-9af9a0405a63\") " pod="calico-apiserver/calico-apiserver-5698f4bb89-kfm2c"
Oct 9 01:07:24.122780 kubelet[2975]: I1009 01:07:24.122513 2975 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhvdj\" (UniqueName: \"kubernetes.io/projected/e5337afd-14f2-4837-8cc8-e85ef6102323-kube-api-access-qhvdj\") pod \"calico-apiserver-5698f4bb89-xl2z6\" (UID: \"e5337afd-14f2-4837-8cc8-e85ef6102323\") " pod="calico-apiserver/calico-apiserver-5698f4bb89-xl2z6"
Oct 9 01:07:24.224332 kubelet[2975]: E1009 01:07:24.224186 2975 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 9 01:07:24.226132 kubelet[2975]: E1009 01:07:24.224186 2975 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 9 01:07:24.227688 kubelet[2975]: E1009 01:07:24.227099 2975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120d0dbb-1ee3-4c81-8fb8-9af9a0405a63-calico-apiserver-certs podName:120d0dbb-1ee3-4c81-8fb8-9af9a0405a63 nodeName:}" failed. No retries permitted until 2024-10-09 01:07:24.724256444 +0000 UTC m=+55.932501061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/120d0dbb-1ee3-4c81-8fb8-9af9a0405a63-calico-apiserver-certs") pod "calico-apiserver-5698f4bb89-kfm2c" (UID: "120d0dbb-1ee3-4c81-8fb8-9af9a0405a63") : secret "calico-apiserver-certs" not found
Oct 9 01:07:24.227688 kubelet[2975]: E1009 01:07:24.227139 2975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5337afd-14f2-4837-8cc8-e85ef6102323-calico-apiserver-certs podName:e5337afd-14f2-4837-8cc8-e85ef6102323 nodeName:}" failed. No retries permitted until 2024-10-09 01:07:24.727123049 +0000 UTC m=+55.935367657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e5337afd-14f2-4837-8cc8-e85ef6102323-calico-apiserver-certs") pod "calico-apiserver-5698f4bb89-xl2z6" (UID: "e5337afd-14f2-4837-8cc8-e85ef6102323") : secret "calico-apiserver-certs" not found
Oct 9 01:07:24.730445 kubelet[2975]: E1009 01:07:24.730255 2975 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 9 01:07:24.730445 kubelet[2975]: E1009 01:07:24.730302 2975 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Oct 9 01:07:24.730445 kubelet[2975]: E1009 01:07:24.730350 2975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e5337afd-14f2-4837-8cc8-e85ef6102323-calico-apiserver-certs podName:e5337afd-14f2-4837-8cc8-e85ef6102323 nodeName:}" failed. No retries permitted until 2024-10-09 01:07:25.730325527 +0000 UTC m=+56.938570165 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e5337afd-14f2-4837-8cc8-e85ef6102323-calico-apiserver-certs") pod "calico-apiserver-5698f4bb89-xl2z6" (UID: "e5337afd-14f2-4837-8cc8-e85ef6102323") : secret "calico-apiserver-certs" not found
Oct 9 01:07:24.730445 kubelet[2975]: E1009 01:07:24.730392 2975 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/120d0dbb-1ee3-4c81-8fb8-9af9a0405a63-calico-apiserver-certs podName:120d0dbb-1ee3-4c81-8fb8-9af9a0405a63 nodeName:}" failed. No retries permitted until 2024-10-09 01:07:25.730371416 +0000 UTC m=+56.938616053 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/120d0dbb-1ee3-4c81-8fb8-9af9a0405a63-calico-apiserver-certs") pod "calico-apiserver-5698f4bb89-kfm2c" (UID: "120d0dbb-1ee3-4c81-8fb8-9af9a0405a63") : secret "calico-apiserver-certs" not found
Oct 9 01:07:25.823012 containerd[1635]: time="2024-10-09T01:07:25.822443396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5698f4bb89-kfm2c,Uid:120d0dbb-1ee3-4c81-8fb8-9af9a0405a63,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 01:07:25.823012 containerd[1635]: time="2024-10-09T01:07:25.822443707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5698f4bb89-xl2z6,Uid:e5337afd-14f2-4837-8cc8-e85ef6102323,Namespace:calico-apiserver,Attempt:0,}"
Oct 9 01:07:25.971280 systemd-networkd[1262]: cali707795cd980: Link UP
Oct 9 01:07:25.973943 systemd-networkd[1262]: cali707795cd980: Gained carrier
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.884 [INFO][4841] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0 calico-apiserver-5698f4bb89- calico-apiserver 120d0dbb-1ee3-4c81-8fb8-9af9a0405a63 809 0 2024-10-09 01:07:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5698f4bb89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-f-4ef11beaf3 calico-apiserver-5698f4bb89-kfm2c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali707795cd980 [] []}} ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.885 [INFO][4841] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.915 [INFO][4863] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" HandleID="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.932 [INFO][4863] ipam_plugin.go 270: Auto assigning IP ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" HandleID="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001169e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-f-4ef11beaf3", "pod":"calico-apiserver-5698f4bb89-kfm2c", "timestamp":"2024-10-09 01:07:25.915292679 +0000 UTC"}, Hostname:"ci-4116-0-0-f-4ef11beaf3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.932 [INFO][4863] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.932 [INFO][4863] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.932 [INFO][4863] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-f-4ef11beaf3'
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.933 [INFO][4863] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.938 [INFO][4863] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.942 [INFO][4863] ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.943 [INFO][4863] ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.946 [INFO][4863] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.946 [INFO][4863] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.947 [INFO][4863] ipam.go 1685: Creating new handle: k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.951 [INFO][4863] ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.956 [INFO][4863] ipam.go 1216: Successfully claimed IPs: [192.168.75.5/26] block=192.168.75.0/26 handle="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.956 [INFO][4863] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.5/26] handle="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" host="ci-4116-0-0-f-4ef11beaf3"
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.956 [INFO][4863] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:07:25.985721 containerd[1635]: 2024-10-09 01:07:25.956 [INFO][4863] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.75.5/26] IPv6=[] ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" HandleID="k8s-pod-network.8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0"
Oct 9 01:07:25.987928 containerd[1635]: 2024-10-09 01:07:25.959 [INFO][4841] k8s.go 386: Populated endpoint ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0", GenerateName:"calico-apiserver-5698f4bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"120d0dbb-1ee3-4c81-8fb8-9af9a0405a63", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5698f4bb89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"", Pod:"calico-apiserver-5698f4bb89-kfm2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali707795cd980", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:07:25.987928 containerd[1635]: 2024-10-09 01:07:25.959 [INFO][4841] k8s.go 387: Calico CNI using IPs: [192.168.75.5/32] ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0"
Oct 9 01:07:25.987928 containerd[1635]: 2024-10-09 01:07:25.959 [INFO][4841] dataplane_linux.go 68: Setting the host side veth name to cali707795cd980 ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0"
Oct 9 01:07:25.987928 containerd[1635]: 2024-10-09 01:07:25.968 [INFO][4841] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0"
Oct 9 01:07:25.987928 containerd[1635]: 2024-10-09
01:07:25.969 [INFO][4841] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0", GenerateName:"calico-apiserver-5698f4bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"120d0dbb-1ee3-4c81-8fb8-9af9a0405a63", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5698f4bb89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0", Pod:"calico-apiserver-5698f4bb89-kfm2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali707795cd980", MAC:"0a:a7:e3:18:9a:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:25.987928 containerd[1635]: 2024-10-09 01:07:25.980 [INFO][4841] k8s.go 
500: Wrote updated endpoint to datastore ContainerID="8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-kfm2c" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--kfm2c-eth0" Oct 9 01:07:25.999689 systemd-networkd[1262]: cali3f5cafdb19e: Link UP Oct 9 01:07:26.001301 systemd-networkd[1262]: cali3f5cafdb19e: Gained carrier Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.884 [INFO][4848] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0 calico-apiserver-5698f4bb89- calico-apiserver e5337afd-14f2-4837-8cc8-e85ef6102323 807 0 2024-10-09 01:07:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5698f4bb89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4116-0-0-f-4ef11beaf3 calico-apiserver-5698f4bb89-xl2z6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f5cafdb19e [] []}} ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.884 [INFO][4848] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.934 [INFO][4864] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" HandleID="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.944 [INFO][4864] ipam_plugin.go 270: Auto assigning IP ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" HandleID="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e56c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4116-0-0-f-4ef11beaf3", "pod":"calico-apiserver-5698f4bb89-xl2z6", "timestamp":"2024-10-09 01:07:25.934965611 +0000 UTC"}, Hostname:"ci-4116-0-0-f-4ef11beaf3", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.944 [INFO][4864] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.956 [INFO][4864] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.956 [INFO][4864] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4116-0-0-f-4ef11beaf3' Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.958 [INFO][4864] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.962 [INFO][4864] ipam.go 372: Looking up existing affinities for host host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.965 [INFO][4864] ipam.go 489: Trying affinity for 192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.966 [INFO][4864] ipam.go 155: Attempting to load block cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.968 [INFO][4864] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.75.0/26 host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.968 [INFO][4864] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.75.0/26 handle="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.971 [INFO][4864] ipam.go 1685: Creating new handle: k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0 Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.978 [INFO][4864] ipam.go 1203: Writing block in order to claim IPs block=192.168.75.0/26 handle="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.988 [INFO][4864] ipam.go 1216: Successfully claimed IPs: [192.168.75.6/26] block=192.168.75.0/26 
handle="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.988 [INFO][4864] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.75.6/26] handle="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" host="ci-4116-0-0-f-4ef11beaf3" Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.988 [INFO][4864] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:26.013430 containerd[1635]: 2024-10-09 01:07:25.988 [INFO][4864] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.75.6/26] IPv6=[] ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" HandleID="k8s-pod-network.1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" Oct 9 01:07:26.018644 containerd[1635]: 2024-10-09 01:07:25.995 [INFO][4848] k8s.go 386: Populated endpoint ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0", GenerateName:"calico-apiserver-5698f4bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5337afd-14f2-4837-8cc8-e85ef6102323", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5698f4bb89", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"", Pod:"calico-apiserver-5698f4bb89-xl2z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f5cafdb19e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:26.018644 containerd[1635]: 2024-10-09 01:07:25.995 [INFO][4848] k8s.go 387: Calico CNI using IPs: [192.168.75.6/32] ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" Oct 9 01:07:26.018644 containerd[1635]: 2024-10-09 01:07:25.995 [INFO][4848] dataplane_linux.go 68: Setting the host side veth name to cali3f5cafdb19e ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" Oct 9 01:07:26.018644 containerd[1635]: 2024-10-09 01:07:25.998 [INFO][4848] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" Oct 9 01:07:26.018644 containerd[1635]: 2024-10-09 
01:07:25.999 [INFO][4848] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0", GenerateName:"calico-apiserver-5698f4bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5337afd-14f2-4837-8cc8-e85ef6102323", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 7, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5698f4bb89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0", Pod:"calico-apiserver-5698f4bb89-xl2z6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.75.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f5cafdb19e", MAC:"ba:64:3a:74:7a:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:26.018644 containerd[1635]: 2024-10-09 01:07:26.006 [INFO][4848] k8s.go 
500: Wrote updated endpoint to datastore ContainerID="1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0" Namespace="calico-apiserver" Pod="calico-apiserver-5698f4bb89-xl2z6" WorkloadEndpoint="ci--4116--0--0--f--4ef11beaf3-k8s-calico--apiserver--5698f4bb89--xl2z6-eth0" Oct 9 01:07:26.058792 containerd[1635]: time="2024-10-09T01:07:26.049482906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:26.058792 containerd[1635]: time="2024-10-09T01:07:26.049527801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:26.058792 containerd[1635]: time="2024-10-09T01:07:26.049537629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:26.058792 containerd[1635]: time="2024-10-09T01:07:26.049641908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:26.098360 containerd[1635]: time="2024-10-09T01:07:26.097107429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 9 01:07:26.098360 containerd[1635]: time="2024-10-09T01:07:26.097157726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 9 01:07:26.098360 containerd[1635]: time="2024-10-09T01:07:26.097171893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:26.100225 containerd[1635]: time="2024-10-09T01:07:26.100131177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 9 01:07:26.164999 containerd[1635]: time="2024-10-09T01:07:26.163603449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5698f4bb89-xl2z6,Uid:e5337afd-14f2-4837-8cc8-e85ef6102323,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0\"" Oct 9 01:07:26.170062 containerd[1635]: time="2024-10-09T01:07:26.170036418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:07:26.195025 containerd[1635]: time="2024-10-09T01:07:26.194687109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5698f4bb89-kfm2c,Uid:120d0dbb-1ee3-4c81-8fb8-9af9a0405a63,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0\"" Oct 9 01:07:27.948201 systemd-networkd[1262]: cali707795cd980: Gained IPv6LL Oct 9 01:07:27.948668 systemd-networkd[1262]: cali3f5cafdb19e: Gained IPv6LL Oct 9 01:07:28.867292 containerd[1635]: time="2024-10-09T01:07:28.867139795Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:28.869779 containerd[1635]: time="2024-10-09T01:07:28.869746585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=40419849" Oct 9 01:07:28.871877 containerd[1635]: time="2024-10-09T01:07:28.871516833Z" level=info msg="ImageCreate event name:\"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:28.877063 containerd[1635]: time="2024-10-09T01:07:28.876674735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Oct 9 01:07:28.890913 containerd[1635]: time="2024-10-09T01:07:28.890881635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 2.720816062s" Oct 9 01:07:28.891047 containerd[1635]: time="2024-10-09T01:07:28.891031450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 01:07:28.900798 containerd[1635]: time="2024-10-09T01:07:28.900607278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 9 01:07:28.905132 containerd[1635]: time="2024-10-09T01:07:28.905092823Z" level=info msg="CreateContainer within sandbox \"1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:07:28.923035 containerd[1635]: time="2024-10-09T01:07:28.922890724Z" level=info msg="CreateContainer within sandbox \"1f51f6a54b2ed567da5e4dbf62243c48a4b239cdec8a59238a5ee2942e926aa0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"44de7560bd153310c4c2bd18b9400e8e0f3b45ca58c92a4aca3a9223bd0eae93\"" Oct 9 01:07:28.931949 containerd[1635]: time="2024-10-09T01:07:28.931928307Z" level=info msg="StartContainer for \"44de7560bd153310c4c2bd18b9400e8e0f3b45ca58c92a4aca3a9223bd0eae93\"" Oct 9 01:07:28.933572 containerd[1635]: time="2024-10-09T01:07:28.933301019Z" level=info msg="StopPodSandbox for \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\"" Oct 9 01:07:29.063247 containerd[1635]: time="2024-10-09T01:07:29.063164654Z" level=info msg="StartContainer for 
\"44de7560bd153310c4c2bd18b9400e8e0f3b45ca58c92a4aca3a9223bd0eae93\" returns successfully" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.047 [WARNING][5028] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928", Pod:"coredns-76f75df574-m4ftf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88bcd0c5e2e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.048 [INFO][5028] k8s.go 608: Cleaning up netns ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.048 [INFO][5028] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" iface="eth0" netns="" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.048 [INFO][5028] k8s.go 615: Releasing IP address(es) ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.048 [INFO][5028] utils.go 188: Calico CNI releasing IP address ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.092 [INFO][5075] ipam_plugin.go 417: Releasing address using handleID ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.092 [INFO][5075] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.092 [INFO][5075] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.098 [WARNING][5075] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.098 [INFO][5075] ipam_plugin.go 445: Releasing address using workloadID ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.099 [INFO][5075] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.109137 containerd[1635]: 2024-10-09 01:07:29.103 [INFO][5028] k8s.go 621: Teardown processing complete. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.111060 containerd[1635]: time="2024-10-09T01:07:29.111033971Z" level=info msg="TearDown network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\" successfully" Oct 9 01:07:29.111207 containerd[1635]: time="2024-10-09T01:07:29.111140384Z" level=info msg="StopPodSandbox for \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\" returns successfully" Oct 9 01:07:29.114956 containerd[1635]: time="2024-10-09T01:07:29.114925578Z" level=info msg="RemovePodSandbox for \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\"" Oct 9 01:07:29.124855 containerd[1635]: time="2024-10-09T01:07:29.124778841Z" level=info msg="Forcibly stopping sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\"" Oct 9 01:07:29.194007 kubelet[2975]: I1009 01:07:29.193782 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5698f4bb89-xl2z6" podStartSLOduration=3.464093432 
podStartE2EDuration="6.19371736s" podCreationTimestamp="2024-10-09 01:07:23 +0000 UTC" firstStartedPulling="2024-10-09 01:07:26.169654322 +0000 UTC m=+57.377898929" lastFinishedPulling="2024-10-09 01:07:28.89927824 +0000 UTC m=+60.107522857" observedRunningTime="2024-10-09 01:07:29.189617256 +0000 UTC m=+60.397861863" watchObservedRunningTime="2024-10-09 01:07:29.19371736 +0000 UTC m=+60.401961966" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.170 [WARNING][5105] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f443b1d9-bed1-4b25-880d-a05ee8cfe5b8", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"9066af073078c2f33bf4f5ee165d1081179ad0c134d242848c81d9b44433e928", Pod:"coredns-76f75df574-m4ftf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali88bcd0c5e2e", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.171 [INFO][5105] k8s.go 608: Cleaning up netns ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.171 [INFO][5105] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" iface="eth0" netns="" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.171 [INFO][5105] k8s.go 615: Releasing IP address(es) ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.171 [INFO][5105] utils.go 188: Calico CNI releasing IP address ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.216 [INFO][5118] ipam_plugin.go 417: Releasing address using handleID ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.218 [INFO][5118] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.218 [INFO][5118] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.224 [WARNING][5118] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.224 [INFO][5118] ipam_plugin.go 445: Releasing address using workloadID ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" HandleID="k8s-pod-network.823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--m4ftf-eth0" Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.227 [INFO][5118] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.233104 containerd[1635]: 2024-10-09 01:07:29.230 [INFO][5105] k8s.go 621: Teardown processing complete. ContainerID="823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c" Oct 9 01:07:29.234007 containerd[1635]: time="2024-10-09T01:07:29.233545202Z" level=info msg="TearDown network for sandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\" successfully" Oct 9 01:07:29.244205 containerd[1635]: time="2024-10-09T01:07:29.243909689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:07:29.244205 containerd[1635]: time="2024-10-09T01:07:29.243969723Z" level=info msg="RemovePodSandbox \"823236011fa09f9bfbc044db6dc522f6e8bc4e4999b18817718eadcfc79cbb3c\" returns successfully" Oct 9 01:07:29.252208 containerd[1635]: time="2024-10-09T01:07:29.250960367Z" level=info msg="StopPodSandbox for \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\"" Oct 9 01:07:29.343355 containerd[1635]: time="2024-10-09T01:07:29.342339677Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 9 01:07:29.346459 containerd[1635]: time="2024-10-09T01:07:29.346432296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 9 01:07:29.349550 containerd[1635]: time="2024-10-09T01:07:29.349509282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"41912266\" in 448.875963ms" Oct 9 01:07:29.349886 containerd[1635]: time="2024-10-09T01:07:29.349682482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:91dd0fd3dab3f170b52404ec5e67926439207bf71c08b7f54de8f3db6209537b\"" Oct 9 01:07:29.352204 containerd[1635]: time="2024-10-09T01:07:29.352186067Z" level=info msg="CreateContainer within sandbox \"8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 9 01:07:29.379874 containerd[1635]: time="2024-10-09T01:07:29.379710674Z" level=info msg="CreateContainer within sandbox \"8a7c73fac9c066414a8f1a3305fea76b1727aff23e5daa946ffcfa6e789e9fc0\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3046a6e13ecc3987c6c45160fbfa0025850c75d3f87e0064f7c3ef485cb889e0\"" Oct 9 01:07:29.382040 containerd[1635]: time="2024-10-09T01:07:29.382021130Z" level=info msg="StartContainer for \"3046a6e13ecc3987c6c45160fbfa0025850c75d3f87e0064f7c3ef485cb889e0\"" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.331 [WARNING][5146] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8316d30b-f132-4ca2-a04b-4276c8d6a2b0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e", Pod:"csi-node-driver-kqtdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, 
InterfaceName:"calid0bb8c34947", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.332 [INFO][5146] k8s.go 608: Cleaning up netns ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.332 [INFO][5146] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" iface="eth0" netns="" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.332 [INFO][5146] k8s.go 615: Releasing IP address(es) ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.332 [INFO][5146] utils.go 188: Calico CNI releasing IP address ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.384 [INFO][5156] ipam_plugin.go 417: Releasing address using handleID ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.384 [INFO][5156] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.384 [INFO][5156] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.393 [WARNING][5156] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.393 [INFO][5156] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.397 [INFO][5156] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.405754 containerd[1635]: 2024-10-09 01:07:29.403 [INFO][5146] k8s.go 621: Teardown processing complete. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.409142 containerd[1635]: time="2024-10-09T01:07:29.406087224Z" level=info msg="TearDown network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\" successfully" Oct 9 01:07:29.409142 containerd[1635]: time="2024-10-09T01:07:29.406209327Z" level=info msg="StopPodSandbox for \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\" returns successfully" Oct 9 01:07:29.409142 containerd[1635]: time="2024-10-09T01:07:29.406819938Z" level=info msg="RemovePodSandbox for \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\"" Oct 9 01:07:29.409142 containerd[1635]: time="2024-10-09T01:07:29.406839326Z" level=info msg="Forcibly stopping sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\"" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.501 [WARNING][5188] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8316d30b-f132-4ca2-a04b-4276c8d6a2b0", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"ab8f0e86646e67f2bc2b26ed4199430fa77892ce91ff6380f7d5f299fc5e700e", Pod:"csi-node-driver-kqtdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.75.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calid0bb8c34947", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.501 [INFO][5188] k8s.go 608: Cleaning up netns ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.501 [INFO][5188] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" iface="eth0" netns="" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.501 [INFO][5188] k8s.go 615: Releasing IP address(es) ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.502 [INFO][5188] utils.go 188: Calico CNI releasing IP address ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.530 [INFO][5204] ipam_plugin.go 417: Releasing address using handleID ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.531 [INFO][5204] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.532 [INFO][5204] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.536 [WARNING][5204] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.537 [INFO][5204] ipam_plugin.go 445: Releasing address using workloadID ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" HandleID="k8s-pod-network.3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-csi--node--driver--kqtdh-eth0" Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.538 [INFO][5204] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.543678 containerd[1635]: 2024-10-09 01:07:29.541 [INFO][5188] k8s.go 621: Teardown processing complete. ContainerID="3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50" Oct 9 01:07:29.545292 containerd[1635]: time="2024-10-09T01:07:29.544023793Z" level=info msg="TearDown network for sandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\" successfully" Oct 9 01:07:29.558060 containerd[1635]: time="2024-10-09T01:07:29.557761019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:07:29.558060 containerd[1635]: time="2024-10-09T01:07:29.557812207Z" level=info msg="RemovePodSandbox \"3dc1372266a4f23945155bcdc422050756d1488848934a77f75671a79edb0f50\" returns successfully" Oct 9 01:07:29.559477 containerd[1635]: time="2024-10-09T01:07:29.559344764Z" level=info msg="StopPodSandbox for \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\"" Oct 9 01:07:29.567713 containerd[1635]: time="2024-10-09T01:07:29.567285728Z" level=info msg="StartContainer for \"3046a6e13ecc3987c6c45160fbfa0025850c75d3f87e0064f7c3ef485cb889e0\" returns successfully" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.628 [WARNING][5232] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0", GenerateName:"calico-kube-controllers-7d677fb677-", Namespace:"calico-system", SelfLink:"", UID:"59b2fdb2-7c87-464c-9731-460e7a5b18c0", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d677fb677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", 
ContainerID:"8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a", Pod:"calico-kube-controllers-7d677fb677-mcqqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a5dbc77d26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.628 [INFO][5232] k8s.go 608: Cleaning up netns ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.628 [INFO][5232] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" iface="eth0" netns="" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.628 [INFO][5232] k8s.go 615: Releasing IP address(es) ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.628 [INFO][5232] utils.go 188: Calico CNI releasing IP address ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.649 [INFO][5241] ipam_plugin.go 417: Releasing address using handleID ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.650 [INFO][5241] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.650 [INFO][5241] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.654 [WARNING][5241] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.654 [INFO][5241] ipam_plugin.go 445: Releasing address using workloadID ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.655 [INFO][5241] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.659432 containerd[1635]: 2024-10-09 01:07:29.657 [INFO][5232] k8s.go 621: Teardown processing complete. 
ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.661881 containerd[1635]: time="2024-10-09T01:07:29.660088025Z" level=info msg="TearDown network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\" successfully" Oct 9 01:07:29.661881 containerd[1635]: time="2024-10-09T01:07:29.660245343Z" level=info msg="StopPodSandbox for \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\" returns successfully" Oct 9 01:07:29.662308 containerd[1635]: time="2024-10-09T01:07:29.662174686Z" level=info msg="RemovePodSandbox for \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\"" Oct 9 01:07:29.662418 containerd[1635]: time="2024-10-09T01:07:29.662404292Z" level=info msg="Forcibly stopping sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\"" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.715 [WARNING][5260] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0", GenerateName:"calico-kube-controllers-7d677fb677-", Namespace:"calico-system", SelfLink:"", UID:"59b2fdb2-7c87-464c-9731-460e7a5b18c0", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d677fb677", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"8652ba55342c63d9e74907ed5a88523086cca0d33bb6848dd8f3c778d1ae5e3a", Pod:"calico-kube-controllers-7d677fb677-mcqqn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.75.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8a5dbc77d26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.715 [INFO][5260] k8s.go 608: Cleaning up netns ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.715 [INFO][5260] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" iface="eth0" netns="" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.715 [INFO][5260] k8s.go 615: Releasing IP address(es) ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.715 [INFO][5260] utils.go 188: Calico CNI releasing IP address ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.743 [INFO][5267] ipam_plugin.go 417: Releasing address using handleID ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.743 [INFO][5267] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.743 [INFO][5267] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.747 [WARNING][5267] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.747 [INFO][5267] ipam_plugin.go 445: Releasing address using workloadID ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" HandleID="k8s-pod-network.aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-calico--kube--controllers--7d677fb677--mcqqn-eth0" Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.748 [INFO][5267] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.755355 containerd[1635]: 2024-10-09 01:07:29.751 [INFO][5260] k8s.go 621: Teardown processing complete. ContainerID="aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa" Oct 9 01:07:29.757040 containerd[1635]: time="2024-10-09T01:07:29.755636016Z" level=info msg="TearDown network for sandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\" successfully" Oct 9 01:07:29.760026 containerd[1635]: time="2024-10-09T01:07:29.759950067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 9 01:07:29.760266 containerd[1635]: time="2024-10-09T01:07:29.760148476Z" level=info msg="RemovePodSandbox \"aeab8fff8407842c0a64ffa6e214c31cd8fbc8f7f9e04797a226f7db33baf7fa\" returns successfully" Oct 9 01:07:29.760952 containerd[1635]: time="2024-10-09T01:07:29.760869438Z" level=info msg="StopPodSandbox for \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\"" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.805 [WARNING][5286] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c783c086-b681-4894-9dd2-660195d788ef", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769", Pod:"coredns-76f75df574-6zdq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4508ece0bd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.805 [INFO][5286] k8s.go 608: Cleaning up netns ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.805 [INFO][5286] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" iface="eth0" netns="" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.805 [INFO][5286] k8s.go 615: Releasing IP address(es) ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.805 [INFO][5286] utils.go 188: Calico CNI releasing IP address ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.827 [INFO][5292] ipam_plugin.go 417: Releasing address using handleID ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.828 [INFO][5292] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.828 [INFO][5292] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.832 [WARNING][5292] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.832 [INFO][5292] ipam_plugin.go 445: Releasing address using workloadID ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0" Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.833 [INFO][5292] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 9 01:07:29.838598 containerd[1635]: 2024-10-09 01:07:29.836 [INFO][5286] k8s.go 621: Teardown processing complete. 
ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Oct 9 01:07:29.838598 containerd[1635]: time="2024-10-09T01:07:29.838474500Z" level=info msg="TearDown network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\" successfully" Oct 9 01:07:29.838598 containerd[1635]: time="2024-10-09T01:07:29.838501651Z" level=info msg="StopPodSandbox for \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\" returns successfully" Oct 9 01:07:29.839795 containerd[1635]: time="2024-10-09T01:07:29.839437042Z" level=info msg="RemovePodSandbox for \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\"" Oct 9 01:07:29.839795 containerd[1635]: time="2024-10-09T01:07:29.839459986Z" level=info msg="Forcibly stopping sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\"" Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.872 [WARNING][5310] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c783c086-b681-4894-9dd2-660195d788ef", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.October, 9, 1, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4116-0-0-f-4ef11beaf3", ContainerID:"4003f92f8d22e168656ed33b1445482bc149028932130b5933650ba706940769", Pod:"coredns-76f75df574-6zdq4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.75.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie4508ece0bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.872 [INFO][5310] k8s.go 608: Cleaning up netns ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25"
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.872 [INFO][5310] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" iface="eth0" netns=""
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.872 [INFO][5310] k8s.go 615: Releasing IP address(es) ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25"
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.872 [INFO][5310] utils.go 188: Calico CNI releasing IP address ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25"
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.892 [INFO][5316] ipam_plugin.go 417: Releasing address using handleID ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.892 [INFO][5316] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.892 [INFO][5316] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.897 [WARNING][5316] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.897 [INFO][5316] ipam_plugin.go 445: Releasing address using workloadID ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" HandleID="k8s-pod-network.67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25" Workload="ci--4116--0--0--f--4ef11beaf3-k8s-coredns--76f75df574--6zdq4-eth0"
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.898 [INFO][5316] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 9 01:07:29.902818 containerd[1635]: 2024-10-09 01:07:29.900 [INFO][5310] k8s.go 621: Teardown processing complete. ContainerID="67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25"
Oct 9 01:07:29.904968 containerd[1635]: time="2024-10-09T01:07:29.902853459Z" level=info msg="TearDown network for sandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\" successfully"
Oct 9 01:07:29.906481 containerd[1635]: time="2024-10-09T01:07:29.906347009Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 9 01:07:29.906481 containerd[1635]: time="2024-10-09T01:07:29.906395411Z" level=info msg="RemovePodSandbox \"67f723f734325a2c0d628fc38041aac55770c1fc11e532e3f286e21843c99e25\" returns successfully"
Oct 9 01:07:30.195641 kubelet[2975]: I1009 01:07:30.195609 2975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 01:07:31.194963 kubelet[2975]: I1009 01:07:31.194923 2975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 01:07:35.792721 kubelet[2975]: I1009 01:07:35.791169 2975 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5698f4bb89-kfm2c" podStartSLOduration=9.637017702 podStartE2EDuration="12.791114607s" podCreationTimestamp="2024-10-09 01:07:23 +0000 UTC" firstStartedPulling="2024-10-09 01:07:26.1959185 +0000 UTC m=+57.404163107" lastFinishedPulling="2024-10-09 01:07:29.350015405 +0000 UTC m=+60.558260012" observedRunningTime="2024-10-09 01:07:30.198897629 +0000 UTC m=+61.407142246" watchObservedRunningTime="2024-10-09 01:07:35.791114607 +0000 UTC m=+66.999359224"
Oct 9 01:07:52.772411 systemd[1]: run-containerd-runc-k8s.io-b69179aae27e235744cf144d7a7406bd37361716d277d0c729c491bdb162ccaf-runc.8K2BcY.mount: Deactivated successfully.
Oct 9 01:08:13.172495 kubelet[2975]: I1009 01:08:13.171565 2975 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 9 01:08:22.798421 systemd[1]: run-containerd-runc-k8s.io-b69179aae27e235744cf144d7a7406bd37361716d277d0c729c491bdb162ccaf-runc.mRkJjd.mount: Deactivated successfully.
Oct 9 01:08:58.961415 systemd[1]: run-containerd-runc-k8s.io-dbfbaf265f6a8afcaefd13cff0f8278c487ff9ac711203a1c2ca412596cb9969-runc.XGDt0f.mount: Deactivated successfully.
Oct 9 01:09:10.068257 systemd[1]: Started sshd@8-49.13.59.7:22-139.178.68.195:56316.service - OpenSSH per-connection server daemon (139.178.68.195:56316).
Oct 9 01:09:11.123533 sshd[5561]: Accepted publickey for core from 139.178.68.195 port 56316 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:11.126710 sshd[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:11.136204 systemd-logind[1608]: New session 8 of user core.
Oct 9 01:09:11.142278 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 9 01:09:12.235344 sshd[5561]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:12.238344 systemd[1]: sshd@8-49.13.59.7:22-139.178.68.195:56316.service: Deactivated successfully.
Oct 9 01:09:12.243816 systemd[1]: session-8.scope: Deactivated successfully.
Oct 9 01:09:12.244159 systemd-logind[1608]: Session 8 logged out. Waiting for processes to exit.
Oct 9 01:09:12.245469 systemd-logind[1608]: Removed session 8.
Oct 9 01:09:17.403227 systemd[1]: Started sshd@9-49.13.59.7:22-139.178.68.195:33300.service - OpenSSH per-connection server daemon (139.178.68.195:33300).
Oct 9 01:09:18.398145 sshd[5586]: Accepted publickey for core from 139.178.68.195 port 33300 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:18.400399 sshd[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:18.406005 systemd-logind[1608]: New session 9 of user core.
Oct 9 01:09:18.413288 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 9 01:09:19.166262 sshd[5586]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:19.170459 systemd[1]: sshd@9-49.13.59.7:22-139.178.68.195:33300.service: Deactivated successfully.
Oct 9 01:09:19.173314 systemd[1]: session-9.scope: Deactivated successfully.
Oct 9 01:09:19.173809 systemd-logind[1608]: Session 9 logged out. Waiting for processes to exit.
Oct 9 01:09:19.175464 systemd-logind[1608]: Removed session 9.
Oct 9 01:09:19.333198 systemd[1]: Started sshd@10-49.13.59.7:22-139.178.68.195:33312.service - OpenSSH per-connection server daemon (139.178.68.195:33312).
Oct 9 01:09:20.322350 sshd[5602]: Accepted publickey for core from 139.178.68.195 port 33312 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:20.324148 sshd[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:20.329284 systemd-logind[1608]: New session 10 of user core.
Oct 9 01:09:20.334375 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 9 01:09:21.098608 sshd[5602]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:21.102209 systemd[1]: sshd@10-49.13.59.7:22-139.178.68.195:33312.service: Deactivated successfully.
Oct 9 01:09:21.107043 systemd-logind[1608]: Session 10 logged out. Waiting for processes to exit.
Oct 9 01:09:21.107477 systemd[1]: session-10.scope: Deactivated successfully.
Oct 9 01:09:21.109133 systemd-logind[1608]: Removed session 10.
Oct 9 01:09:21.266461 systemd[1]: Started sshd@11-49.13.59.7:22-139.178.68.195:59840.service - OpenSSH per-connection server daemon (139.178.68.195:59840).
Oct 9 01:09:22.257192 sshd[5614]: Accepted publickey for core from 139.178.68.195 port 59840 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:22.259013 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:22.264148 systemd-logind[1608]: New session 11 of user core.
Oct 9 01:09:22.269292 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 9 01:09:23.047047 sshd[5614]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:23.054679 systemd[1]: sshd@11-49.13.59.7:22-139.178.68.195:59840.service: Deactivated successfully.
Oct 9 01:09:23.060136 systemd[1]: session-11.scope: Deactivated successfully.
Oct 9 01:09:23.061636 systemd-logind[1608]: Session 11 logged out. Waiting for processes to exit.
Oct 9 01:09:23.063649 systemd-logind[1608]: Removed session 11.
Oct 9 01:09:28.215347 systemd[1]: Started sshd@12-49.13.59.7:22-139.178.68.195:59846.service - OpenSSH per-connection server daemon (139.178.68.195:59846).
Oct 9 01:09:29.237744 sshd[5657]: Accepted publickey for core from 139.178.68.195 port 59846 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:29.240187 sshd[5657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:29.246046 systemd-logind[1608]: New session 12 of user core.
Oct 9 01:09:29.250327 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 9 01:09:30.027739 sshd[5657]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:30.032758 systemd[1]: sshd@12-49.13.59.7:22-139.178.68.195:59846.service: Deactivated successfully.
Oct 9 01:09:30.033343 systemd-logind[1608]: Session 12 logged out. Waiting for processes to exit.
Oct 9 01:09:30.035715 systemd[1]: session-12.scope: Deactivated successfully.
Oct 9 01:09:30.037509 systemd-logind[1608]: Removed session 12.
Oct 9 01:09:30.198166 systemd[1]: Started sshd@13-49.13.59.7:22-139.178.68.195:59848.service - OpenSSH per-connection server daemon (139.178.68.195:59848).
Oct 9 01:09:31.208371 sshd[5712]: Accepted publickey for core from 139.178.68.195 port 59848 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:31.210145 sshd[5712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:31.215863 systemd-logind[1608]: New session 13 of user core.
Oct 9 01:09:31.220283 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 9 01:09:32.218643 sshd[5712]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:32.225601 systemd[1]: sshd@13-49.13.59.7:22-139.178.68.195:59848.service: Deactivated successfully.
Oct 9 01:09:32.229343 systemd-logind[1608]: Session 13 logged out. Waiting for processes to exit.
Oct 9 01:09:32.229873 systemd[1]: session-13.scope: Deactivated successfully.
Oct 9 01:09:32.231530 systemd-logind[1608]: Removed session 13.
Oct 9 01:09:32.379439 systemd[1]: Started sshd@14-49.13.59.7:22-139.178.68.195:58904.service - OpenSSH per-connection server daemon (139.178.68.195:58904).
Oct 9 01:09:33.385293 sshd[5728]: Accepted publickey for core from 139.178.68.195 port 58904 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:33.387502 sshd[5728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:33.398635 systemd-logind[1608]: New session 14 of user core.
Oct 9 01:09:33.403780 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 9 01:09:35.724241 sshd[5728]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:35.731168 systemd[1]: sshd@14-49.13.59.7:22-139.178.68.195:58904.service: Deactivated successfully.
Oct 9 01:09:35.750230 systemd-logind[1608]: Session 14 logged out. Waiting for processes to exit.
Oct 9 01:09:35.751303 systemd[1]: session-14.scope: Deactivated successfully.
Oct 9 01:09:35.755608 systemd-logind[1608]: Removed session 14.
Oct 9 01:09:35.893781 systemd[1]: Started sshd@15-49.13.59.7:22-139.178.68.195:58908.service - OpenSSH per-connection server daemon (139.178.68.195:58908).
Oct 9 01:09:36.895569 sshd[5747]: Accepted publickey for core from 139.178.68.195 port 58908 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:36.897926 sshd[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:36.904088 systemd-logind[1608]: New session 15 of user core.
Oct 9 01:09:36.909524 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 9 01:09:37.801825 sshd[5747]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:37.805262 systemd[1]: sshd@15-49.13.59.7:22-139.178.68.195:58908.service: Deactivated successfully.
Oct 9 01:09:37.809612 systemd-logind[1608]: Session 15 logged out. Waiting for processes to exit.
Oct 9 01:09:37.810223 systemd[1]: session-15.scope: Deactivated successfully.
Oct 9 01:09:37.811860 systemd-logind[1608]: Removed session 15.
Oct 9 01:09:37.969784 systemd[1]: Started sshd@16-49.13.59.7:22-139.178.68.195:58924.service - OpenSSH per-connection server daemon (139.178.68.195:58924).
Oct 9 01:09:38.959540 sshd[5759]: Accepted publickey for core from 139.178.68.195 port 58924 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:38.962650 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:38.974945 systemd-logind[1608]: New session 16 of user core.
Oct 9 01:09:38.985504 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 9 01:09:39.706201 sshd[5759]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:39.712924 systemd[1]: sshd@16-49.13.59.7:22-139.178.68.195:58924.service: Deactivated successfully.
Oct 9 01:09:39.716422 systemd[1]: session-16.scope: Deactivated successfully.
Oct 9 01:09:39.718832 systemd-logind[1608]: Session 16 logged out. Waiting for processes to exit.
Oct 9 01:09:39.720107 systemd-logind[1608]: Removed session 16.
Oct 9 01:09:44.875527 systemd[1]: Started sshd@17-49.13.59.7:22-139.178.68.195:33568.service - OpenSSH per-connection server daemon (139.178.68.195:33568).
Oct 9 01:09:45.875969 sshd[5783]: Accepted publickey for core from 139.178.68.195 port 33568 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:45.878657 sshd[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:45.886624 systemd-logind[1608]: New session 17 of user core.
Oct 9 01:09:45.891253 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 9 01:09:46.659636 sshd[5783]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:46.663126 systemd[1]: sshd@17-49.13.59.7:22-139.178.68.195:33568.service: Deactivated successfully.
Oct 9 01:09:46.668803 systemd[1]: session-17.scope: Deactivated successfully.
Oct 9 01:09:46.669138 systemd-logind[1608]: Session 17 logged out. Waiting for processes to exit.
Oct 9 01:09:46.671597 systemd-logind[1608]: Removed session 17.
Oct 9 01:09:51.826320 systemd[1]: Started sshd@18-49.13.59.7:22-139.178.68.195:41660.service - OpenSSH per-connection server daemon (139.178.68.195:41660).
Oct 9 01:09:52.803395 systemd[1]: run-containerd-runc-k8s.io-b69179aae27e235744cf144d7a7406bd37361716d277d0c729c491bdb162ccaf-runc.dF4Q9l.mount: Deactivated successfully.
Oct 9 01:09:52.852031 sshd[5803]: Accepted publickey for core from 139.178.68.195 port 41660 ssh2: RSA SHA256:7D5yTAA09OouO5gOA4zhXVezbm3wMyaEqlomw3tDS6M
Oct 9 01:09:52.854833 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 9 01:09:52.860639 systemd-logind[1608]: New session 18 of user core.
Oct 9 01:09:52.864359 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 9 01:09:53.610560 sshd[5803]: pam_unix(sshd:session): session closed for user core
Oct 9 01:09:53.614792 systemd[1]: sshd@18-49.13.59.7:22-139.178.68.195:41660.service: Deactivated successfully.
Oct 9 01:09:53.618921 systemd-logind[1608]: Session 18 logged out. Waiting for processes to exit.
Oct 9 01:09:53.619202 systemd[1]: session-18.scope: Deactivated successfully.
Oct 9 01:09:53.620874 systemd-logind[1608]: Removed session 18.