Oct 8 19:40:33.967248 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 8 19:40:33.967273 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024 Oct 8 19:40:33.967284 kernel: KASLR enabled Oct 8 19:40:33.967290 kernel: efi: EFI v2.7 by EDK II Oct 8 19:40:33.967295 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x1347a1018 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18 Oct 8 19:40:33.967309 kernel: random: crng init done Oct 8 19:40:33.967318 kernel: ACPI: Early table checksum verification disabled Oct 8 19:40:33.967324 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS ) Oct 8 19:40:33.967337 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Oct 8 19:40:33.967344 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967359 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967365 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967371 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967377 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967392 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967402 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967409 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967415 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 8 19:40:33.967422 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013) Oct 8 19:40:33.967428 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Oct 8 19:40:33.967434 kernel: NUMA: Failed to initialise from firmware Oct 8 19:40:33.967441 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Oct 8 19:40:33.967447 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff] Oct 8 19:40:33.967454 kernel: Zone ranges: Oct 8 19:40:33.967460 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Oct 8 19:40:33.967467 kernel: DMA32 empty Oct 8 19:40:33.967475 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Oct 8 19:40:33.967481 kernel: Movable zone start for each node Oct 8 19:40:33.967487 kernel: Early memory node ranges Oct 8 19:40:33.967494 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff] Oct 8 19:40:33.967501 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff] Oct 8 19:40:33.967507 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff] Oct 8 19:40:33.967514 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff] Oct 8 19:40:33.967520 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff] Oct 8 19:40:33.967527 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Oct 8 19:40:33.967533 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Oct 8 19:40:33.967540 kernel: psci: probing for conduit method from ACPI. Oct 8 19:40:33.967548 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 8 19:40:33.967554 kernel: psci: Using standard PSCI v0.2 function IDs Oct 8 19:40:33.967561 kernel: psci: Trusted OS migration not required Oct 8 19:40:33.967570 kernel: psci: SMC Calling Convention v1.1 Oct 8 19:40:33.967577 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 8 19:40:33.967584 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Oct 8 19:40:33.967592 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Oct 8 19:40:33.967599 kernel: pcpu-alloc: [0] 0 [0] 1 Oct 8 19:40:33.967606 kernel: Detected PIPT I-cache on CPU0 Oct 8 19:40:33.967613 kernel: CPU features: detected: GIC system register CPU interface Oct 8 19:40:33.967619 kernel: CPU features: detected: Hardware dirty bit management Oct 8 19:40:33.967626 kernel: CPU features: detected: Spectre-v4 Oct 8 19:40:33.967633 kernel: CPU features: detected: Spectre-BHB Oct 8 19:40:33.967639 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 8 19:40:33.967646 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 8 19:40:33.967653 kernel: CPU features: detected: ARM erratum 1418040 Oct 8 19:40:33.967660 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 8 19:40:33.967668 kernel: alternatives: applying boot alternatives Oct 8 19:40:33.967676 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2 Oct 8 19:40:33.967684 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 8 19:40:33.967691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 8 19:40:33.967698 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 8 19:40:33.967704 kernel: Fallback order for Node 0: 0 Oct 8 19:40:33.967711 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000 Oct 8 19:40:33.967718 kernel: Policy zone: Normal Oct 8 19:40:33.967725 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 8 19:40:33.967732 kernel: software IO TLB: area num 2. Oct 8 19:40:33.967738 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB) Oct 8 19:40:33.967748 kernel: Memory: 3881848K/4096000K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 214152K reserved, 0K cma-reserved) Oct 8 19:40:33.969800 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 8 19:40:33.969812 kernel: trace event string verifier disabled Oct 8 19:40:33.969819 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 8 19:40:33.969827 kernel: rcu: RCU event tracing is enabled. Oct 8 19:40:33.969834 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 8 19:40:33.969841 kernel: Trampoline variant of Tasks RCU enabled. Oct 8 19:40:33.969849 kernel: Tracing variant of Tasks RCU enabled. Oct 8 19:40:33.969856 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 8 19:40:33.969863 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 8 19:40:33.969870 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 8 19:40:33.969883 kernel: GICv3: 256 SPIs implemented Oct 8 19:40:33.969890 kernel: GICv3: 0 Extended SPIs implemented Oct 8 19:40:33.969897 kernel: Root IRQ handler: gic_handle_irq Oct 8 19:40:33.969904 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 8 19:40:33.969911 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 8 19:40:33.969918 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 8 19:40:33.969925 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1) Oct 8 19:40:33.969932 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1) Oct 8 19:40:33.969939 kernel: GICv3: using LPI property table @0x00000001000e0000 Oct 8 19:40:33.969946 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000 Oct 8 19:40:33.969991 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 8 19:40:33.970003 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 19:40:33.970010 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 8 19:40:33.970017 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 8 19:40:33.970024 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 8 19:40:33.970031 kernel: Console: colour dummy device 80x25 Oct 8 19:40:33.970038 kernel: ACPI: Core revision 20230628 Oct 8 19:40:33.970045 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 8 19:40:33.970053 kernel: pid_max: default: 32768 minimum: 301 Oct 8 19:40:33.970060 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Oct 8 19:40:33.970067 kernel: SELinux: Initializing. Oct 8 19:40:33.970075 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:40:33.970083 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 8 19:40:33.970090 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:40:33.970097 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Oct 8 19:40:33.970104 kernel: rcu: Hierarchical SRCU implementation. Oct 8 19:40:33.970111 kernel: rcu: Max phase no-delay instances is 400. Oct 8 19:40:33.970118 kernel: Platform MSI: ITS@0x8080000 domain created Oct 8 19:40:33.970125 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 8 19:40:33.970132 kernel: Remapping and enabling EFI services. Oct 8 19:40:33.970141 kernel: smp: Bringing up secondary CPUs ... Oct 8 19:40:33.970148 kernel: Detected PIPT I-cache on CPU1 Oct 8 19:40:33.970155 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 8 19:40:33.970162 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000 Oct 8 19:40:33.970169 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 8 19:40:33.970176 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 8 19:40:33.970183 kernel: smp: Brought up 1 node, 2 CPUs Oct 8 19:40:33.970190 kernel: SMP: Total of 2 processors activated. 
Oct 8 19:40:33.970197 kernel: CPU features: detected: 32-bit EL0 Support Oct 8 19:40:33.970204 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 8 19:40:33.970213 kernel: CPU features: detected: Common not Private translations Oct 8 19:40:33.970220 kernel: CPU features: detected: CRC32 instructions Oct 8 19:40:33.970233 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 8 19:40:33.970241 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 8 19:40:33.970249 kernel: CPU features: detected: LSE atomic instructions Oct 8 19:40:33.970257 kernel: CPU features: detected: Privileged Access Never Oct 8 19:40:33.970264 kernel: CPU features: detected: RAS Extension Support Oct 8 19:40:33.970272 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 8 19:40:33.970279 kernel: CPU: All CPU(s) started at EL1 Oct 8 19:40:33.970288 kernel: alternatives: applying system-wide alternatives Oct 8 19:40:33.970296 kernel: devtmpfs: initialized Oct 8 19:40:33.970303 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 8 19:40:33.970310 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 8 19:40:33.970318 kernel: pinctrl core: initialized pinctrl subsystem Oct 8 19:40:33.970325 kernel: SMBIOS 3.0.0 present. Oct 8 19:40:33.970332 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Oct 8 19:40:33.970342 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 8 19:40:33.970349 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 8 19:40:33.970357 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 8 19:40:33.970364 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 8 19:40:33.970372 kernel: audit: initializing netlink subsys (disabled) Oct 8 19:40:33.970379 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Oct 8 19:40:33.970387 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 8 19:40:33.970394 kernel: cpuidle: using governor menu Oct 8 19:40:33.970402 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Oct 8 19:40:33.970417 kernel: ASID allocator initialised with 32768 entries Oct 8 19:40:33.970429 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 8 19:40:33.970438 kernel: Serial: AMBA PL011 UART driver Oct 8 19:40:33.970447 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 8 19:40:33.970455 kernel: Modules: 0 pages in range for non-PLT usage Oct 8 19:40:33.970463 kernel: Modules: 509104 pages in range for PLT usage Oct 8 19:40:33.970470 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 8 19:40:33.970477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 8 19:40:33.970485 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 8 19:40:33.970494 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 8 19:40:33.970502 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 8 19:40:33.970509 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 8 19:40:33.970517 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 8 19:40:33.970524 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 8 19:40:33.970531 kernel: ACPI: Added _OSI(Module Device) Oct 8 19:40:33.970539 kernel: ACPI: Added _OSI(Processor Device) Oct 8 19:40:33.970547 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 8 19:40:33.970555 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 8 19:40:33.970617 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 8 19:40:33.970626 kernel: ACPI: Interpreter enabled Oct 8 19:40:33.970634 kernel: ACPI: Using GIC for interrupt routing Oct 8 19:40:33.970641 kernel: ACPI: MCFG table detected, 1 entries Oct 8 19:40:33.970649 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 8 19:40:33.970656 kernel: printk: console [ttyAMA0] enabled Oct 8 19:40:33.970663 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 8 19:40:33.970852 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 8 19:40:33.970936 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 8 19:40:33.971002 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 8 19:40:33.971065 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 8 19:40:33.971128 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 8 19:40:33.971138 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 8 19:40:33.971145 kernel: PCI host bridge to bus 0000:00 Oct 8 19:40:33.971217 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 8 19:40:33.971277 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 8 19:40:33.971339 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 8 19:40:33.971396 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 8 19:40:33.971479 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 8 19:40:33.971563 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 Oct 8 19:40:33.971682 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff] Oct 8 19:40:33.971946 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref] Oct 8 19:40:33.972055 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.972122 kernel: pci 0000:00:02.0: reg 0x10: [mem 
0x11288000-0x11288fff] Oct 8 19:40:33.972212 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.972279 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff] Oct 8 19:40:33.972353 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.972419 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff] Oct 8 19:40:33.972494 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.972561 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff] Oct 8 19:40:33.972634 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.972700 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff] Oct 8 19:40:33.972801 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.972876 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff] Oct 8 19:40:33.972996 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.973941 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff] Oct 8 19:40:33.974040 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.974112 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff] Oct 8 19:40:33.974258 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 Oct 8 19:40:33.974339 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff] Oct 8 19:40:33.974424 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 Oct 8 19:40:33.974491 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207] Oct 8 19:40:33.974617 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 Oct 8 19:40:33.974709 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff] Oct 8 19:40:33.974804 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 8 19:40:33.974912 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Oct 8 19:40:33.974996 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 8 19:40:33.975072 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit] Oct 8 19:40:33.975149 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 Oct 8 19:40:33.975218 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff] Oct 8 19:40:33.975284 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref] Oct 8 19:40:33.975362 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 Oct 8 19:40:33.975432 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref] Oct 8 19:40:33.975515 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 8 19:40:33.975587 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref] Oct 8 19:40:33.975665 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 Oct 8 19:40:33.975734 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff] Oct 8 19:40:33.975838 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref] Oct 8 19:40:33.975927 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 Oct 8 19:40:33.976000 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff] Oct 8 19:40:33.976069 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref] Oct 8 19:40:33.976136 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref] Oct 8 19:40:33.976207 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Oct 8 19:40:33.976273 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to 
[bus 01] add_size 100000 add_align 100000 Oct 8 19:40:33.976337 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Oct 8 19:40:33.976408 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Oct 8 19:40:33.976477 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Oct 8 19:40:33.976542 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Oct 8 19:40:33.976612 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Oct 8 19:40:33.976677 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Oct 8 19:40:33.976742 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Oct 8 19:40:33.976835 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Oct 8 19:40:33.976957 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Oct 8 19:40:33.977030 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Oct 8 19:40:33.977107 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Oct 8 19:40:33.977173 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Oct 8 19:40:33.977237 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000 Oct 8 19:40:33.977306 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 8 19:40:33.977371 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Oct 8 19:40:33.977436 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Oct 8 19:40:33.977504 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 8 19:40:33.977572 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Oct 8 19:40:33.977637 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Oct 8 19:40:33.977706 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 8 19:40:33.977791 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Oct 8 19:40:33.977869 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Oct 8 19:40:33.977941 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 8 19:40:33.978007 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Oct 8 19:40:33.978073 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Oct 8 19:40:33.978146 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff] Oct 8 19:40:33.978211 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref] Oct 8 19:40:33.978281 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 
0x10200000-0x103fffff] Oct 8 19:40:33.978346 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref] Oct 8 19:40:33.978413 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff] Oct 8 19:40:33.978480 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref] Oct 8 19:40:33.978548 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff] Oct 8 19:40:33.978613 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref] Oct 8 19:40:33.978677 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff] Oct 8 19:40:33.978742 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref] Oct 8 19:40:33.978993 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff] Oct 8 19:40:33.979063 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 8 19:40:33.979129 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff] Oct 8 19:40:33.979198 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 8 19:40:33.979264 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff] Oct 8 19:40:33.979327 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 8 19:40:33.979392 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff] Oct 8 19:40:33.979456 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref] Oct 8 19:40:33.979525 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref] Oct 8 19:40:33.979589 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff] Oct 8 19:40:33.979657 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff] Oct 8 19:40:33.979722 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Oct 8 19:40:33.979807 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff] Oct 8 19:40:33.979874 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Oct 8 19:40:33.979941 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff] Oct 8 19:40:33.980006 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Oct 8 19:40:33.980071 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff] Oct 8 19:40:33.980135 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Oct 8 19:40:33.980205 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff] Oct 8 19:40:33.980271 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Oct 8 19:40:33.980335 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff] Oct 8 19:40:33.980398 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Oct 8 19:40:33.980461 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff] Oct 8 19:40:33.980525 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Oct 8 19:40:33.980589 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff] Oct 8 19:40:33.980654 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Oct 8 19:40:33.982865 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff] Oct 8 19:40:33.982985 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff] Oct 8 19:40:33.983059 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007] Oct 8 19:40:33.983135 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref] Oct 8 19:40:33.983201 kernel: pci 0000:01:00.0: BAR 
4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 8 19:40:33.983270 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff] Oct 8 19:40:33.983342 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Oct 8 19:40:33.983410 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Oct 8 19:40:33.983486 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Oct 8 19:40:33.983550 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Oct 8 19:40:33.983714 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit] Oct 8 19:40:33.983826 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Oct 8 19:40:33.983964 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Oct 8 19:40:33.984036 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Oct 8 19:40:33.984100 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Oct 8 19:40:33.984176 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref] Oct 8 19:40:33.984263 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff] Oct 8 19:40:33.984348 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Oct 8 19:40:33.984415 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Oct 8 19:40:33.984479 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Oct 8 19:40:33.984549 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Oct 8 19:40:33.984622 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref] Oct 8 19:40:33.984706 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Oct 8 19:40:33.986907 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Oct 8 19:40:33.987001 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Oct 8 19:40:33.987068 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Oct 8 19:40:33.987144 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref] Oct 8 19:40:33.987216 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Oct 8 19:40:33.987372 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Oct 8 19:40:33.987464 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Oct 8 19:40:33.987539 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Oct 8 19:40:33.987612 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref] Oct 8 19:40:33.987773 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff] Oct 8 19:40:33.987850 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Oct 8 19:40:33.987916 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Oct 8 19:40:33.987983 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Oct 8 19:40:33.988054 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 8 19:40:33.988129 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref] Oct 8 19:40:33.988200 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref] Oct 8 19:40:33.988270 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff] Oct 8 19:40:33.988340 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Oct 8 19:40:33.988470 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Oct 8 19:40:33.988544 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Oct 8 19:40:33.988611 kernel: pci 0000:00:02.6: bridge window [mem 
0x8000c00000-0x8000dfffff 64bit pref] Oct 8 19:40:33.988706 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Oct 8 19:40:33.988810 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Oct 8 19:40:33.988880 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Oct 8 19:40:33.988981 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 8 19:40:33.990229 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Oct 8 19:40:33.990320 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Oct 8 19:40:33.990407 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Oct 8 19:40:33.990476 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Oct 8 19:40:33.990557 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 8 19:40:33.990619 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 8 19:40:33.990679 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 8 19:40:33.990826 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Oct 8 19:40:33.990898 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Oct 8 19:40:33.990975 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Oct 8 19:40:33.991051 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Oct 8 19:40:33.991117 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Oct 8 19:40:33.991197 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Oct 8 19:40:33.991272 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Oct 8 19:40:33.991334 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Oct 8 19:40:33.991397 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Oct 8 19:40:33.991467 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Oct 8 19:40:33.991534 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Oct 8 19:40:33.991597 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Oct 8 19:40:33.991668 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Oct 8 19:40:33.991743 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Oct 8 19:40:33.991840 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Oct 8 19:40:33.991918 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Oct 8 19:40:33.991980 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Oct 8 19:40:33.992043 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Oct 8 19:40:33.992113 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Oct 8 19:40:33.992174 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Oct 8 19:40:33.992236 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Oct 8 19:40:33.992312 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Oct 8 19:40:33.992373 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Oct 8 19:40:33.992438 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Oct 8 19:40:33.992506 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Oct 8 19:40:33.992568 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Oct 8 19:40:33.992628 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Oct 8 19:40:33.992638 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 8 19:40:33.992648 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 8 
19:40:33.992656 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 8 19:40:33.992664 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 8 19:40:33.992672 kernel: iommu: Default domain type: Translated Oct 8 19:40:33.992680 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 8 19:40:33.992688 kernel: efivars: Registered efivars operations Oct 8 19:40:33.992695 kernel: vgaarb: loaded Oct 8 19:40:33.992703 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 8 19:40:33.992711 kernel: VFS: Disk quotas dquot_6.6.0 Oct 8 19:40:33.992720 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 8 19:40:33.992728 kernel: pnp: PnP ACPI init Oct 8 19:40:33.992819 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 8 19:40:33.992833 kernel: pnp: PnP ACPI: found 1 devices Oct 8 19:40:33.992841 kernel: NET: Registered PF_INET protocol family Oct 8 19:40:33.992849 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 8 19:40:33.992857 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 8 19:40:33.992865 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 8 19:40:33.992876 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 8 19:40:33.992885 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 8 19:40:33.992910 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 8 19:40:33.992918 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:40:33.992926 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 8 19:40:33.992935 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 19:40:33.993019 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 8 19:40:33.993031 kernel: PCI: CLS 0 bytes, default 64 Oct 8 19:40:33.993039 kernel: kvm [1]: HYP mode not available Oct 8 19:40:33.993050 kernel: Initialise system trusted keyrings Oct 8 19:40:33.993057 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 19:40:33.993066 kernel: Key type asymmetric registered Oct 8 19:40:33.993074 kernel: Asymmetric key parser 'x509' registered Oct 8 19:40:33.993082 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 8 19:40:33.993090 kernel: io scheduler mq-deadline registered Oct 8 19:40:33.993098 kernel: io scheduler kyber registered Oct 8 19:40:33.993105 kernel: io scheduler bfq registered Oct 8 19:40:33.993114 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 8 19:40:33.993195 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 8 19:40:33.993290 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 8 19:40:33.993363 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.993433 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 8 19:40:33.993503 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 8 19:40:33.993571 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.993641 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 8 19:40:33.993707 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 8 19:40:33.993851 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.993924 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 8 19:40:33.993989 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Oct 8 19:40:33.994054 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.994136 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 8 19:40:33.994202 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 8 19:40:33.994266 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.994334 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 8 19:40:33.994397 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 8 19:40:33.994461 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.994531 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 8 19:40:33.994595 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 8 19:40:33.994660 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.994728 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 8 19:40:33.994803 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 8 19:40:33.994868 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.994882 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 8 19:40:33.994950 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 8 19:40:33.995015 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 8 19:40:33.995129 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 19:40:33.995143 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 8 19:40:33.995151 kernel: ACPI: button: Power Button [PWRB] Oct 8 19:40:33.995159 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 19:40:33.995233 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Oct 8 19:40:33.995349 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 8 19:40:33.995438 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 8 19:40:33.995452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 19:40:33.995461 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 8 19:40:33.995541 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 8 19:40:33.995553 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 8 19:40:33.995561 kernel: thunder_xcv, ver 1.0 Oct 8 19:40:33.995569 kernel: thunder_bgx, ver 1.0 Oct 8 19:40:33.995581 kernel: nicpf, ver 1.0 Oct 8 19:40:33.995589 kernel: nicvf, ver 1.0 Oct 8 19:40:33.995683 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 8 19:40:33.995794 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:40:33 UTC (1728416433) Oct 8 19:40:33.995829 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 19:40:33.995840 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 8 19:40:33.995848 kernel: 
watchdog: Delayed init of the lockup detector failed: -19 Oct 8 19:40:33.995857 kernel: watchdog: Hard watchdog permanently disabled Oct 8 19:40:33.995869 kernel: NET: Registered PF_INET6 protocol family Oct 8 19:40:33.995877 kernel: Segment Routing with IPv6 Oct 8 19:40:33.995885 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 19:40:33.995893 kernel: NET: Registered PF_PACKET protocol family Oct 8 19:40:33.995900 kernel: Key type dns_resolver registered Oct 8 19:40:33.995908 kernel: registered taskstats version 1 Oct 8 19:40:33.995916 kernel: Loading compiled-in X.509 certificates Oct 8 19:40:33.995924 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e' Oct 8 19:40:33.995932 kernel: Key type .fscrypt registered Oct 8 19:40:33.995941 kernel: Key type fscrypt-provisioning registered Oct 8 19:40:33.995950 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 19:40:33.995957 kernel: ima: Allocated hash algorithm: sha1 Oct 8 19:40:33.995965 kernel: ima: No architecture policies found Oct 8 19:40:33.995973 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 8 19:40:33.995981 kernel: clk: Disabling unused clocks Oct 8 19:40:33.995990 kernel: Freeing unused kernel memory: 39104K Oct 8 19:40:33.995998 kernel: Run /init as init process Oct 8 19:40:33.996006 kernel: with arguments: Oct 8 19:40:33.996015 kernel: /init Oct 8 19:40:33.996022 kernel: with environment: Oct 8 19:40:33.996037 kernel: HOME=/ Oct 8 19:40:33.996045 kernel: TERM=linux Oct 8 19:40:33.996053 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 19:40:33.996063 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:40:33.996073 systemd[1]: Detected virtualization kvm. Oct 8 19:40:33.996084 systemd[1]: Detected architecture arm64. Oct 8 19:40:33.996092 systemd[1]: Running in initrd. Oct 8 19:40:33.996100 systemd[1]: No hostname configured, using default hostname. Oct 8 19:40:33.996108 systemd[1]: Hostname set to . Oct 8 19:40:33.996116 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:40:33.996130 systemd[1]: Queued start job for default target initrd.target. Oct 8 19:40:33.996140 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:40:33.996149 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:40:33.996161 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 19:40:33.996169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:40:33.996177 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 19:40:33.996186 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 19:40:33.996196 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 19:40:33.996205 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Oct 8 19:40:33.996213 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:40:33.996223 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:40:33.996231 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:40:33.996240 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:40:33.996248 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:40:33.996256 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:40:33.996264 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:40:33.996273 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:40:33.996281 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 19:40:33.996289 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 19:40:33.996299 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:40:33.996307 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:40:33.996316 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:40:33.996324 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:40:33.996332 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 19:40:33.996340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:40:33.996349 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 19:40:33.996357 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 19:40:33.996367 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:40:33.996375 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:40:33.996383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:33.996392 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 19:40:33.996400 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:40:33.996408 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 19:40:33.996445 systemd-journald[236]: Collecting audit messages is disabled. Oct 8 19:40:33.996466 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 19:40:33.996475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:40:33.996486 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 19:40:33.996495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:40:33.996504 systemd-journald[236]: Journal started Oct 8 19:40:33.996524 systemd-journald[236]: Runtime Journal (/run/log/journal/23c880f1955941e7bd950c7409e166be) is 8.0M, max 76.5M, 68.5M free. Oct 8 19:40:33.968121 systemd-modules-load[237]: Inserted module 'overlay' Oct 8 19:40:33.998437 systemd-modules-load[237]: Inserted module 'br_netfilter' Oct 8 19:40:33.998996 kernel: Bridge firewalling registered Oct 8 19:40:34.000768 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 19:40:34.001777 systemd[1]: Started systemd-journald.service - Journal Service. Oct 8 19:40:34.004743 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 8 19:40:34.007085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:40:34.016124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:40:34.018723 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 8 19:40:34.034823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:40:34.036571 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:40:34.038685 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:40:34.041439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:40:34.049070 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 19:40:34.055079 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:40:34.068521 dracut-cmdline[273]: dracut-dracut-053 Oct 8 19:40:34.074011 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2 Oct 8 19:40:34.103572 systemd-resolved[275]: Positive Trust Anchors: Oct 8 19:40:34.103593 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:40:34.103626 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 8 19:40:34.109318 systemd-resolved[275]: Defaulting to hostname 'linux'. Oct 8 19:40:34.111573 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:40:34.115320 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:40:34.181847 kernel: SCSI subsystem initialized Oct 8 19:40:34.187816 kernel: Loading iSCSI transport class v2.0-870. Oct 8 19:40:34.195877 kernel: iscsi: registered transport (tcp) Oct 8 19:40:34.209828 kernel: iscsi: registered transport (qla4xxx) Oct 8 19:40:34.209993 kernel: QLogic iSCSI HBA Driver Oct 8 19:40:34.263377 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 8 19:40:34.279306 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 19:40:34.301192 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 8 19:40:34.301266 kernel: device-mapper: uevent: version 1.0.3 Oct 8 19:40:34.301278 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 19:40:34.356084 kernel: raid6: neonx8 gen() 15558 MB/s Oct 8 19:40:34.372807 kernel: raid6: neonx4 gen() 15395 MB/s Oct 8 19:40:34.389822 kernel: raid6: neonx2 gen() 13078 MB/s Oct 8 19:40:34.406820 kernel: raid6: neonx1 gen() 10385 MB/s Oct 8 19:40:34.423822 kernel: raid6: int64x8 gen() 6884 MB/s Oct 8 19:40:34.440818 kernel: raid6: int64x4 gen() 7250 MB/s Oct 8 19:40:34.457828 kernel: raid6: int64x2 gen() 6065 MB/s Oct 8 19:40:34.474844 kernel: raid6: int64x1 gen() 5003 MB/s Oct 8 19:40:34.474943 kernel: raid6: using algorithm neonx8 gen() 15558 MB/s Oct 8 19:40:34.491822 kernel: raid6: .... xor() 11753 MB/s, rmw enabled Oct 8 19:40:34.491909 kernel: raid6: using neon recovery algorithm Oct 8 19:40:34.496991 kernel: xor: measuring software checksum speed Oct 8 19:40:34.497064 kernel: 8regs : 19788 MB/sec Oct 8 19:40:34.497081 kernel: 32regs : 19026 MB/sec Oct 8 19:40:34.497790 kernel: arm64_neon : 26981 MB/sec Oct 8 19:40:34.497822 kernel: xor: using function: arm64_neon (26981 MB/sec) Oct 8 19:40:34.559826 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 19:40:34.575159 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:40:34.581176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:40:34.600294 systemd-udevd[457]: Using default interface naming scheme 'v255'. Oct 8 19:40:34.604481 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:40:34.612359 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 19:40:34.628500 dracut-pre-trigger[461]: rd.md=0: removing MD RAID activation Oct 8 19:40:34.663846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:40:34.672029 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:40:34.725975 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:40:34.741915 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 19:40:34.761175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 19:40:34.761993 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:40:34.763498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:40:34.765689 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:40:34.772053 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 19:40:34.792241 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:40:34.827666 kernel: scsi host0: Virtio SCSI HBA Oct 8 19:40:34.872553 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 8 19:40:34.885743 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 8 19:40:34.891485 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:40:34.892079 kernel: ACPI: bus type USB registered Oct 8 19:40:34.891616 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:40:34.893188 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 8 19:40:34.895870 kernel: usbcore: registered new interface driver usbfs Oct 8 19:40:34.895896 kernel: usbcore: registered new interface driver hub Oct 8 19:40:34.895906 kernel: usbcore: registered new device driver usb Oct 8 19:40:34.893699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:40:34.893847 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:40:34.896361 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:34.905031 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:34.934385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:40:34.943790 kernel: sr 0:0:0:0: Power-on or device reset occurred Oct 8 19:40:34.946801 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 19:40:34.947040 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 8 19:40:34.947911 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Oct 8 19:40:34.948092 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 19:40:34.948313 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:40:34.953830 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 8 19:40:34.954018 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Oct 8 19:40:34.960963 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 19:40:34.963236 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 8 19:40:34.963335 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 8 19:40:34.963416 kernel: sd 0:0:0:1: Power-on or device reset occurred Oct 8 19:40:34.963523 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 8 19:40:34.963603 kernel: sd 0:0:0:1: [sda] Write Protect is off Oct 8 19:40:34.963683 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Oct 8 19:40:34.964773 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 8 19:40:34.967856 kernel: hub 1-0:1.0: USB hub found Oct 8 19:40:34.968438 kernel: hub 1-0:1.0: 4 ports detected Oct 8 19:40:34.971459 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 8 19:40:34.971678 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 19:40:34.971691 kernel: GPT:17805311 != 80003071 Oct 8 19:40:34.971701 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 19:40:34.971711 kernel: GPT:17805311 != 80003071 Oct 8 19:40:34.971720 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 19:40:34.971730 kernel: hub 2-0:1.0: USB hub found Oct 8 19:40:34.971860 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:34.971873 kernel: hub 2-0:1.0: 4 ports detected Oct 8 19:40:34.973782 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Oct 8 19:40:34.987212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:40:35.018791 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (515) Oct 8 19:40:35.024316 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (505) Oct 8 19:40:35.023664 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Oct 8 19:40:35.039656 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 8 19:40:35.046934 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 8 19:40:35.052703 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 8 19:40:35.054570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Oct 8 19:40:35.062052 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 19:40:35.074771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:35.074848 disk-uuid[575]: Primary Header is updated. Oct 8 19:40:35.074848 disk-uuid[575]: Secondary Entries is updated. Oct 8 19:40:35.074848 disk-uuid[575]: Secondary Header is updated. Oct 8 19:40:35.214637 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 8 19:40:35.354238 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Oct 8 19:40:35.354332 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 8 19:40:35.355103 kernel: usbcore: registered new interface driver usbhid Oct 8 19:40:35.355151 kernel: usbhid: USB HID core driver Oct 8 19:40:35.457908 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Oct 8 19:40:35.590801 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Oct 8 19:40:35.643798 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Oct 8 19:40:36.098796 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 19:40:36.102210 disk-uuid[576]: The operation has completed successfully. Oct 8 19:40:36.151563 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 19:40:36.151667 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 19:40:36.166971 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 19:40:36.186847 sh[593]: Success Oct 8 19:40:36.204849 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 19:40:36.271662 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 19:40:36.281924 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 19:40:36.285836 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 19:40:36.304950 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575 Oct 8 19:40:36.305034 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:40:36.305059 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 19:40:36.305082 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 19:40:36.306230 kernel: BTRFS info (device dm-0): using free space tree Oct 8 19:40:36.316847 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 19:40:36.319005 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 19:40:36.320554 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Oct 8 19:40:36.327955 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 19:40:36.331029 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 19:40:36.344774 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:40:36.344831 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:40:36.344844 kernel: BTRFS info (device sda6): using free space tree Oct 8 19:40:36.349210 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 19:40:36.349284 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 19:40:36.363058 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 19:40:36.363839 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:40:36.370642 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 19:40:36.381582 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 19:40:36.498088 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:40:36.508043 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:40:36.511853 ignition[675]: Ignition 2.18.0 Oct 8 19:40:36.512484 ignition[675]: Stage: fetch-offline Oct 8 19:40:36.512992 ignition[675]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:36.513017 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:36.514178 ignition[675]: parsed url from cmdline: "" Oct 8 19:40:36.514185 ignition[675]: no config URL provided Oct 8 19:40:36.514192 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:40:36.514216 ignition[675]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:40:36.514222 ignition[675]: failed to fetch config: resource requires networking Oct 8 19:40:36.514565 ignition[675]: Ignition finished successfully Oct 8 19:40:36.517514 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:40:36.534423 systemd-networkd[780]: lo: Link UP Oct 8 19:40:36.534439 systemd-networkd[780]: lo: Gained carrier Oct 8 19:40:36.536083 systemd-networkd[780]: Enumeration completed Oct 8 19:40:36.536505 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:40:36.536820 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:36.536824 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:40:36.538188 systemd[1]: Reached target network.target - Network. Oct 8 19:40:36.538254 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:36.538257 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:40:36.538844 systemd-networkd[780]: eth0: Link UP Oct 8 19:40:36.538848 systemd-networkd[780]: eth0: Gained carrier Oct 8 19:40:36.538855 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Oct 8 19:40:36.547699 systemd-networkd[780]: eth1: Link UP Oct 8 19:40:36.547703 systemd-networkd[780]: eth1: Gained carrier Oct 8 19:40:36.547714 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:36.554044 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 8 19:40:36.570439 ignition[783]: Ignition 2.18.0 Oct 8 19:40:36.570451 ignition[783]: Stage: fetch Oct 8 19:40:36.570692 ignition[783]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:36.570703 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:36.570824 ignition[783]: parsed url from cmdline: "" Oct 8 19:40:36.570828 ignition[783]: no config URL provided Oct 8 19:40:36.570833 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:40:36.570841 ignition[783]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:40:36.570861 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 8 19:40:36.571888 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 8 19:40:36.585887 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:40:36.685113 systemd-networkd[780]: eth0: DHCPv4 address 49.13.142.189/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 19:40:36.772078 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 8 19:40:36.778601 ignition[783]: GET result: OK Oct 8 19:40:36.778827 ignition[783]: parsing config with SHA512: 623f548b4c32db05e6cddc7c6d675df7207e1a2657f90b4e8c89c6c15ec3025345e4b5f5746c44b2a3f1ae6467e84385cd036d0cf6934328a588e80da528d482 Oct 8 19:40:36.786073 unknown[783]: fetched base config from "system" Oct 8 19:40:36.786100 unknown[783]: fetched base config from "system" Oct 8 19:40:36.786742 ignition[783]: fetch: fetch complete Oct 8 19:40:36.786109 unknown[783]: fetched user config from "hetzner" Oct 8 19:40:36.787070 ignition[783]: fetch: fetch passed Oct 8 19:40:36.787155 ignition[783]: Ignition finished successfully Oct 8 19:40:36.789940 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 19:40:36.797109 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 8 19:40:36.815561 ignition[791]: Ignition 2.18.0 Oct 8 19:40:36.815573 ignition[791]: Stage: kargs Oct 8 19:40:36.815808 ignition[791]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:36.815819 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:36.817273 ignition[791]: kargs: kargs passed Oct 8 19:40:36.817346 ignition[791]: Ignition finished successfully Oct 8 19:40:36.820451 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 19:40:36.829063 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 19:40:36.840700 ignition[798]: Ignition 2.18.0 Oct 8 19:40:36.840711 ignition[798]: Stage: disks Oct 8 19:40:36.840983 ignition[798]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:36.840995 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:36.841968 ignition[798]: disks: disks passed Oct 8 19:40:36.842027 ignition[798]: Ignition finished successfully Oct 8 19:40:36.844720 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 19:40:36.846403 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Oct 8 19:40:36.847507 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 19:40:36.848088 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:40:36.849125 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:40:36.850000 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:40:36.857977 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 8 19:40:36.875385 systemd-fsck[807]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 8 19:40:36.880464 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 19:40:36.885722 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 19:40:36.938781 kernel: EXT4-fs (sda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none. Oct 8 19:40:36.939930 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 19:40:36.941636 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 19:40:36.954948 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:40:36.958588 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 19:40:36.963106 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Oct 8 19:40:36.967376 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 19:40:36.968613 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:40:36.971934 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 19:40:36.977857 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (815) Oct 8 19:40:36.977916 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:40:36.977928 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:40:36.979030 kernel: BTRFS info (device sda6): using free space tree Oct 8 19:40:36.979242 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 8 19:40:36.984934 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 19:40:36.985030 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 19:40:36.993258 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:40:37.044819 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 19:40:37.048833 coreos-metadata[817]: Oct 08 19:40:37.048 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Oct 8 19:40:37.051484 coreos-metadata[817]: Oct 08 19:40:37.050 INFO Fetch successful Oct 8 19:40:37.051484 coreos-metadata[817]: Oct 08 19:40:37.050 INFO wrote hostname ci-3975-2-2-5-28a2d443fc to /sysroot/etc/hostname Oct 8 19:40:37.053574 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 19:40:37.056633 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Oct 8 19:40:37.058559 initrd-setup-root[857]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 19:40:37.063353 initrd-setup-root[864]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 19:40:37.169974 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 19:40:37.175946 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Oct 8 19:40:37.177978 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 19:40:37.191823 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:40:37.217850 ignition[932]: INFO : Ignition 2.18.0 Oct 8 19:40:37.217850 ignition[932]: INFO : Stage: mount Oct 8 19:40:37.217850 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:37.217850 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:37.221067 ignition[932]: INFO : mount: mount passed Oct 8 19:40:37.221067 ignition[932]: INFO : Ignition finished successfully Oct 8 19:40:37.219845 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 19:40:37.230023 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 19:40:37.231475 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 19:40:37.303815 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 19:40:37.312154 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:40:37.333858 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (946) Oct 8 19:40:37.336018 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 19:40:37.336081 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:40:37.336106 kernel: BTRFS info (device sda6): using free space tree Oct 8 19:40:37.339799 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 19:40:37.339866 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 19:40:37.343981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:40:37.377550 ignition[963]: INFO : Ignition 2.18.0 Oct 8 19:40:37.377550 ignition[963]: INFO : Stage: files Oct 8 19:40:37.378920 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:37.378920 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:37.378920 ignition[963]: DEBUG : files: compiled without relabeling support, skipping Oct 8 19:40:37.383885 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 19:40:37.383885 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 19:40:37.388814 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 19:40:37.388814 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 19:40:37.388814 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 19:40:37.388814 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:40:37.386798 unknown[963]: wrote ssh authorized keys file for user: core Oct 8 19:40:37.392965 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 8 19:40:37.454351 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 19:40:37.591850 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:40:37.591850 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 19:40:37.594122 
ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:40:37.594122 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Oct 8 19:40:37.704975 systemd-networkd[780]: eth0: Gained IPv6LL Oct 8 19:40:38.345094 systemd-networkd[780]: eth1: Gained IPv6LL Oct 8 19:40:38.412646 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 19:40:38.660205 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Oct 8 19:40:38.660205 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(d): op(e): [finished] writing 
systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 19:40:38.663564 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:40:38.663564 ignition[963]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:40:38.663564 ignition[963]: INFO : files: files passed Oct 8 19:40:38.663564 ignition[963]: INFO : Ignition finished successfully Oct 8 19:40:38.666675 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 19:40:38.676101 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 19:40:38.680000 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 19:40:38.685444 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 19:40:38.686808 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 19:40:38.696158 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:40:38.696158 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:40:38.698995 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:40:38.701410 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:40:38.703505 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 19:40:38.710977 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 19:40:38.756522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 19:40:38.756728 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 19:40:38.759422 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 19:40:38.761604 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 8 19:40:38.762901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 19:40:38.772241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 19:40:38.788811 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:40:38.795113 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 19:40:38.808415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:40:38.809924 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:40:38.811144 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 19:40:38.811778 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 19:40:38.811914 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:40:38.813774 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Oct 8 19:40:38.815080 systemd[1]: Stopped target basic.target - Basic System. Oct 8 19:40:38.816104 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 19:40:38.817292 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:40:38.818566 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 19:40:38.819848 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 19:40:38.821262 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:40:38.822475 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 19:40:38.823710 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 19:40:38.824867 systemd[1]: Stopped target swap.target - Swaps. Oct 8 19:40:38.825982 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 8 19:40:38.826118 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:40:38.827328 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:40:38.827932 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:40:38.829010 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 19:40:38.830881 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:40:38.832319 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 19:40:38.832459 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 19:40:38.833928 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 19:40:38.834064 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:40:38.835366 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 19:40:38.835476 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 19:40:38.836273 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Oct 8 19:40:38.836365 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Oct 8 19:40:38.848080 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 19:40:38.848646 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 19:40:38.848810 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:40:38.861445 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 8 19:40:38.862615 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 19:40:38.866420 ignition[1016]: INFO : Ignition 2.18.0 Oct 8 19:40:38.866420 ignition[1016]: INFO : Stage: umount Oct 8 19:40:38.866420 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:40:38.866420 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 19:40:38.862808 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:40:38.871134 ignition[1016]: INFO : umount: umount passed Oct 8 19:40:38.871134 ignition[1016]: INFO : Ignition finished successfully Oct 8 19:40:38.867119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 19:40:38.867238 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:40:38.875390 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 19:40:38.875501 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Oct 8 19:40:38.880191 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 19:40:38.880327 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 19:40:38.881311 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 19:40:38.881358 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 19:40:38.882069 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 19:40:38.882113 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 19:40:38.882626 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 19:40:38.882660 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 8 19:40:38.883665 systemd[1]: Stopped target network.target - Network. Oct 8 19:40:38.885376 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 19:40:38.885440 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:40:38.886047 systemd[1]: Stopped target paths.target - Path Units. Oct 8 19:40:38.886466 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 19:40:38.890345 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:40:38.891341 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 19:40:38.891768 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 19:40:38.894094 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 19:40:38.894145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:40:38.894743 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 19:40:38.894805 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:40:38.895322 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 19:40:38.895369 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 19:40:38.898039 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 19:40:38.898097 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 19:40:38.899298 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 19:40:38.904493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 19:40:38.907830 systemd-networkd[780]: eth0: DHCPv6 lease lost Oct 8 19:40:38.911008 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 19:40:38.911892 systemd-networkd[780]: eth1: DHCPv6 lease lost Oct 8 19:40:38.915511 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 19:40:38.915648 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 19:40:38.920258 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 8 19:40:38.920457 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 8 19:40:38.925227 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 8 19:40:38.925285 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:40:38.934045 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 8 19:40:38.935738 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 8 19:40:38.937914 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 19:40:38.941037 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 8 19:40:38.941113 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Oct 8 19:40:38.944875 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 8 19:40:38.944977 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 8 19:40:38.945980 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 8 19:40:38.946029 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:40:38.951234 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:40:38.956078 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 8 19:40:38.956188 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 8 19:40:38.963476 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 8 19:40:38.963659 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 8 19:40:38.983458 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 8 19:40:38.983715 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:40:38.986307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 8 19:40:38.986404 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 8 19:40:38.987707 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 8 19:40:38.987959 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:40:38.989419 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 8 19:40:38.989532 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:40:38.991280 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 8 19:40:38.991339 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 8 19:40:38.993021 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:40:38.993081 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:40:39.003023 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 8 19:40:39.003570 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 8 19:40:39.003687 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:40:39.004663 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:40:39.004705 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:40:39.006456 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 8 19:40:39.006572 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 8 19:40:39.014724 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 8 19:40:39.014883 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 8 19:40:39.016131 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 8 19:40:39.029087 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 8 19:40:39.040597 systemd[1]: Switching root. Oct 8 19:40:39.079841 systemd-journald[236]: Journal stopped Oct 8 19:40:40.036073 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). 
Oct 8 19:40:40.036160 kernel: SELinux: policy capability network_peer_controls=1 Oct 8 19:40:40.036173 kernel: SELinux: policy capability open_perms=1 Oct 8 19:40:40.036186 kernel: SELinux: policy capability extended_socket_class=1 Oct 8 19:40:40.036196 kernel: SELinux: policy capability always_check_network=0 Oct 8 19:40:40.036205 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 8 19:40:40.036219 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 8 19:40:40.036228 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 8 19:40:40.036237 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 8 19:40:40.036247 kernel: audit: type=1403 audit(1728416439.214:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 8 19:40:40.036259 systemd[1]: Successfully loaded SELinux policy in 38.301ms. Oct 8 19:40:40.036277 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.986ms. Oct 8 19:40:40.036289 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 19:40:40.036300 systemd[1]: Detected virtualization kvm. Oct 8 19:40:40.036312 systemd[1]: Detected architecture arm64. Oct 8 19:40:40.036323 systemd[1]: Detected first boot. Oct 8 19:40:40.036333 systemd[1]: Hostname set to . Oct 8 19:40:40.036343 systemd[1]: Initializing machine ID from VM UUID. Oct 8 19:40:40.036353 zram_generator::config[1059]: No configuration found. Oct 8 19:40:40.036364 systemd[1]: Populated /etc with preset unit settings. Oct 8 19:40:40.036374 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 8 19:40:40.036385 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 8 19:40:40.036395 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 8 19:40:40.036408 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 8 19:40:40.036418 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 8 19:40:40.036429 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 8 19:40:40.036439 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 8 19:40:40.036450 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 8 19:40:40.036460 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 8 19:40:40.036471 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 8 19:40:40.036481 systemd[1]: Created slice user.slice - User and Session Slice. Oct 8 19:40:40.036493 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:40:40.036503 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:40:40.036514 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 8 19:40:40.036524 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 8 19:40:40.036534 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Oct 8 19:40:40.036544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 19:40:40.036554 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 8 19:40:40.036565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:40:40.036575 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 8 19:40:40.036587 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 8 19:40:40.036597 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 8 19:40:40.036607 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 8 19:40:40.036617 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:40:40.036631 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:40:40.036641 systemd[1]: Reached target slices.target - Slice Units. Oct 8 19:40:40.036651 systemd[1]: Reached target swap.target - Swaps. Oct 8 19:40:40.036663 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 8 19:40:40.036673 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 8 19:40:40.036683 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 19:40:40.036693 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 8 19:40:40.036703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 19:40:40.036715 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 8 19:40:40.036725 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 8 19:40:40.036738 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 8 19:40:40.036749 systemd[1]: Mounting media.mount - External Media Directory... Oct 8 19:40:40.039858 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 8 19:40:40.039882 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 8 19:40:40.039894 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 8 19:40:40.039905 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 8 19:40:40.039916 systemd[1]: Reached target machines.target - Containers. Oct 8 19:40:40.039926 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 8 19:40:40.039937 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:40:40.039947 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 19:40:40.039964 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 8 19:40:40.039975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:40:40.039985 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:40:40.039999 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:40:40.040012 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 8 19:40:40.040027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Oct 8 19:40:40.040039 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 8 19:40:40.040050 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 8 19:40:40.040060 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 8 19:40:40.040071 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 8 19:40:40.040081 systemd[1]: Stopped systemd-fsck-usr.service. Oct 8 19:40:40.040091 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 19:40:40.040102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 19:40:40.040113 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 8 19:40:40.040124 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 8 19:40:40.040135 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:40:40.040145 systemd[1]: verity-setup.service: Deactivated successfully. Oct 8 19:40:40.040156 systemd[1]: Stopped verity-setup.service. Oct 8 19:40:40.040166 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 8 19:40:40.040176 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 8 19:40:40.040187 systemd[1]: Mounted media.mount - External Media Directory. Oct 8 19:40:40.040199 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 8 19:40:40.040209 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 8 19:40:40.040219 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 8 19:40:40.040229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:40:40.040240 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 8 19:40:40.040251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 8 19:40:40.040261 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:40:40.040274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:40:40.040284 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:40:40.040294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:40:40.040304 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 19:40:40.040354 systemd-journald[1128]: Collecting audit messages is disabled. Oct 8 19:40:40.040383 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 8 19:40:40.040394 kernel: loop: module loaded Oct 8 19:40:40.040406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 19:40:40.040418 systemd-journald[1128]: Journal started Oct 8 19:40:40.040440 systemd-journald[1128]: Runtime Journal (/run/log/journal/23c880f1955941e7bd950c7409e166be) is 8.0M, max 76.5M, 68.5M free. Oct 8 19:40:39.739818 systemd[1]: Queued start job for default target multi-user.target. Oct 8 19:40:39.766949 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Oct 8 19:40:39.768010 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 8 19:40:40.042935 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 8 19:40:40.046828 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 8 19:40:40.051815 kernel: ACPI: bus type drm_connector registered Oct 8 19:40:40.048704 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 8 19:40:40.049679 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 8 19:40:40.059782 kernel: fuse: init (API version 7.39) Oct 8 19:40:40.056435 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 8 19:40:40.061597 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:40:40.061792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:40:40.062728 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:40:40.062891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:40:40.063951 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 8 19:40:40.064088 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 8 19:40:40.073264 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 8 19:40:40.078996 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 8 19:40:40.081807 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 8 19:40:40.081854 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:40:40.083475 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 8 19:40:40.093076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 8 19:40:40.098965 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 8 19:40:40.099674 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:40:40.102371 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 8 19:40:40.111018 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 8 19:40:40.111650 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 8 19:40:40.118007 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 8 19:40:40.118980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:40:40.121152 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 8 19:40:40.125417 systemd-journald[1128]: Time spent on flushing to /var/log/journal/23c880f1955941e7bd950c7409e166be is 24.024ms for 1119 entries. Oct 8 19:40:40.125417 systemd-journald[1128]: System Journal (/var/log/journal/23c880f1955941e7bd950c7409e166be) is 8.0M, max 584.8M, 576.8M free. Oct 8 19:40:40.157890 systemd-journald[1128]: Received client request to flush runtime journal. Oct 8 19:40:40.125646 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 8 19:40:40.130866 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 19:40:40.132466 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 8 19:40:40.135317 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Oct 8 19:40:40.166054 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 8 19:40:40.184829 kernel: loop0: detected capacity change from 0 to 8 Oct 8 19:40:40.188059 kernel: block loop0: the capability attribute has been deprecated. Oct 8 19:40:40.192801 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 8 19:40:40.193733 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 8 19:40:40.201982 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 8 19:40:40.203140 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:40:40.207696 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 8 19:40:40.215097 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 8 19:40:40.230942 kernel: loop1: detected capacity change from 0 to 59688 Oct 8 19:40:40.233790 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 8 19:40:40.252933 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 8 19:40:40.258444 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 8 19:40:40.273091 kernel: loop2: detected capacity change from 0 to 194096 Oct 8 19:40:40.281854 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 8 19:40:40.290980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 19:40:40.315993 kernel: loop3: detected capacity change from 0 to 113672 Oct 8 19:40:40.338607 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Oct 8 19:40:40.338631 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Oct 8 19:40:40.345457 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 19:40:40.358065 kernel: loop4: detected capacity change from 0 to 8 Oct 8 19:40:40.365043 kernel: loop5: detected capacity change from 0 to 59688 Oct 8 19:40:40.377783 kernel: loop6: detected capacity change from 0 to 194096 Oct 8 19:40:40.407800 kernel: loop7: detected capacity change from 0 to 113672 Oct 8 19:40:40.421107 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Oct 8 19:40:40.421679 (sd-merge)[1197]: Merged extensions into '/usr'. Oct 8 19:40:40.430674 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Oct 8 19:40:40.431005 systemd[1]: Reloading... Oct 8 19:40:40.559857 zram_generator::config[1221]: No configuration found. Oct 8 19:40:40.686826 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 8 19:40:40.717842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:40:40.766301 systemd[1]: Reloading finished in 334 ms. Oct 8 19:40:40.818380 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 8 19:40:40.819542 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 8 19:40:40.835193 systemd[1]: Starting ensure-sysext.service... 
Oct 8 19:40:40.839517 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 8 19:40:40.853261 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 8 19:40:40.870109 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:40:40.874455 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Oct 8 19:40:40.874482 systemd[1]: Reloading... Oct 8 19:40:40.877179 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 8 19:40:40.877606 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 8 19:40:40.878305 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 8 19:40:40.878507 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Oct 8 19:40:40.878551 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Oct 8 19:40:40.883627 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:40:40.885816 systemd-tmpfiles[1259]: Skipping /boot Oct 8 19:40:40.898801 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot. Oct 8 19:40:40.898962 systemd-tmpfiles[1259]: Skipping /boot Oct 8 19:40:40.924340 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Oct 8 19:40:41.003795 zram_generator::config[1286]: No configuration found. Oct 8 19:40:41.064785 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1293) Oct 8 19:40:41.172420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:40:41.191794 kernel: mousedev: PS/2 mouse device common for all mice Oct 8 19:40:41.251598 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 8 19:40:41.252061 systemd[1]: Reloading finished in 377 ms. Oct 8 19:40:41.269554 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:40:41.272298 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 19:40:41.295842 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1307) Oct 8 19:40:41.325702 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Oct 8 19:40:41.326729 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Oct 8 19:40:41.326785 kernel: [drm] features: -context_init Oct 8 19:40:41.326803 kernel: [drm] number of scanouts: 1 Oct 8 19:40:41.326819 kernel: [drm] number of cap sets: 0 Oct 8 19:40:41.326039 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:40:41.331863 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Oct 8 19:40:41.337163 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 8 19:40:41.342951 kernel: Console: switching to colour frame buffer device 160x50 Oct 8 19:40:41.351900 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Oct 8 19:40:41.356056 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Oct 8 19:40:41.359715 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:40:41.366024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 8 19:40:41.385486 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 8 19:40:41.397251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:40:41.403168 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 8 19:40:41.407477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 8 19:40:41.410157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 8 19:40:41.410839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:40:41.414936 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 8 19:40:41.422357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:40:41.422744 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:40:41.427208 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 8 19:40:41.431373 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 8 19:40:41.432650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 8 19:40:41.453335 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:41.455458 systemd[1]: Finished ensure-sysext.service. Oct 8 19:40:41.456557 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 8 19:40:41.458736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 8 19:40:41.458927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 8 19:40:41.460530 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 8 19:40:41.476514 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 8 19:40:41.479217 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 8 19:40:41.518408 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 8 19:40:41.519417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 8 19:40:41.519642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 8 19:40:41.520963 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 8 19:40:41.521112 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 8 19:40:41.521976 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 8 19:40:41.522106 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 8 19:40:41.538292 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 8 19:40:41.539134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Oct 8 19:40:41.539323 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 8 19:40:41.541320 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 8 19:40:41.543409 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 8 19:40:41.552022 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 8 19:40:41.553968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:40:41.557608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:40:41.571980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:40:41.578858 augenrules[1407]: No rules Oct 8 19:40:41.582126 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:40:41.583181 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 8 19:40:41.585016 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 8 19:40:41.663588 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 8 19:40:41.670952 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 8 19:40:41.678184 systemd-networkd[1373]: lo: Link UP Oct 8 19:40:41.678201 systemd-networkd[1373]: lo: Gained carrier Oct 8 19:40:41.682614 systemd-networkd[1373]: Enumeration completed Oct 8 19:40:41.682806 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:40:41.687473 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:41.687489 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:40:41.688923 systemd-networkd[1373]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:41.688932 systemd-networkd[1373]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:40:41.690260 systemd-networkd[1373]: eth0: Link UP Oct 8 19:40:41.690265 systemd-networkd[1373]: eth0: Gained carrier Oct 8 19:40:41.690285 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:41.691124 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 8 19:40:41.697375 systemd-networkd[1373]: eth1: Link UP Oct 8 19:40:41.697390 systemd-networkd[1373]: eth1: Gained carrier Oct 8 19:40:41.697420 systemd-networkd[1373]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:40:41.701803 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:40:41.718391 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 8 19:40:41.721965 systemd[1]: Reached target time-set.target - System Time Set. Oct 8 19:40:41.723782 systemd-resolved[1376]: Positive Trust Anchors: Oct 8 19:40:41.723802 systemd-resolved[1376]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 19:40:41.723833 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 8 19:40:41.733859 systemd-networkd[1373]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 19:40:41.735146 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:40:41.735793 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Oct 8 19:40:41.738981 systemd-resolved[1376]: Using system hostname 'ci-3975-2-2-5-28a2d443fc'. Oct 8 19:40:41.740952 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 19:40:41.744048 systemd[1]: Reached target network.target - Network. Oct 8 19:40:41.744785 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:40:41.775885 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 8 19:40:41.778080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:40:41.778882 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:40:41.779675 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 8 19:40:41.780592 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 8 19:40:41.781814 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 8 19:40:41.782672 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 8 19:40:41.783611 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 8 19:40:41.784501 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 8 19:40:41.784549 systemd[1]: Reached target paths.target - Path Units. Oct 8 19:40:41.785388 systemd[1]: Reached target timers.target - Timer Units. Oct 8 19:40:41.787335 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 8 19:40:41.789384 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 8 19:40:41.795129 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 8 19:40:41.798160 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 8 19:40:41.799684 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 8 19:40:41.800572 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 19:40:41.801255 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:40:41.801887 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:40:41.801925 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 8 19:40:41.808047 systemd[1]: Starting containerd.service - containerd container runtime... 
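[Editor's example] The DHCPv4 lease logged above hands eth1 the address 10.0.0.3/32 with gateway 10.0.0.1. With a /32 prefix the "subnet" contains only the host itself, so the gateway is never on-link in the classical sense and the default route has to be installed via an on-link/host route, which networkd derives from the lease. A small sketch with the standard-library ipaddress module illustrates the situation (values taken from the log line; nothing else assumed):

    import ipaddress

    addr = ipaddress.ip_interface("10.0.0.3/32")     # address from the eth1 lease above
    gateway = ipaddress.ip_address("10.0.0.1")       # gateway from the same lease

    # The /32 network contains only the host address, so the gateway lies
    # outside it; reaching it requires an explicit on-link/host route.
    print(addr.network)              # 10.0.0.3/32
    print(gateway in addr.network)   # False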
Oct 8 19:40:41.811506 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 8 19:40:41.818104 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 8 19:40:41.819959 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 8 19:40:41.823952 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 8 19:40:41.833214 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 8 19:40:41.834855 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 8 19:40:41.840738 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 8 19:40:41.845562 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 8 19:40:41.859995 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 8 19:40:41.863847 systemd-networkd[1373]: eth0: DHCPv4 address 49.13.142.189/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 19:40:41.866014 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 8 19:40:41.872556 jq[1441]: false Oct 8 19:40:41.872629 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 8 19:40:41.876069 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Oct 8 19:40:41.881435 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 8 19:40:41.882078 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 8 19:40:41.885013 systemd[1]: Starting update-engine.service - Update Engine... Oct 8 19:40:41.893488 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 8 19:40:41.895489 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 8 19:40:41.900873 coreos-metadata[1439]: Oct 08 19:40:41.900 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Oct 8 19:40:41.900873 coreos-metadata[1439]: Oct 08 19:40:41.900 INFO Fetch successful Oct 8 19:40:41.900873 coreos-metadata[1439]: Oct 08 19:40:41.900 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Oct 8 19:40:41.900873 coreos-metadata[1439]: Oct 08 19:40:41.900 INFO Fetch successful Oct 8 19:40:41.915334 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
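[Editor's example] The metadata agent above fetches http://169.254.169.254/hetzner/v1/metadata. A minimal sketch of the same request is below; it assumes it runs inside a Hetzner Cloud instance where that link-local endpoint is reachable, and it simply prints the raw response body rather than interpreting it.

    import urllib.request

    # Endpoint taken from the coreos-metadata log lines above; only reachable
    # from inside the instance, so this is a sketch, not a general-purpose tool.
    URL = "http://169.254.169.254/hetzner/v1/metadata"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode())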
Oct 8 19:40:41.916820 extend-filesystems[1442]: Found loop4 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found loop5 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found loop6 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found loop7 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda1 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda2 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda3 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found usr Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda4 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda6 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda7 Oct 8 19:40:41.916820 extend-filesystems[1442]: Found sda9 Oct 8 19:40:41.916820 extend-filesystems[1442]: Checking size of /dev/sda9 Oct 8 19:40:41.916648 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 8 19:40:41.917133 systemd[1]: motdgen.service: Deactivated successfully. Oct 8 19:40:41.917890 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 8 19:40:41.930719 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 8 19:40:41.949058 jq[1458]: true Oct 8 19:40:41.932234 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 8 19:40:41.990819 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 8 19:40:41.996636 extend-filesystems[1442]: Resized partition /dev/sda9 Oct 8 19:40:42.016426 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 8 19:40:42.017892 extend-filesystems[1484]: resize2fs 1.47.0 (5-Feb-2023) Oct 8 19:40:42.017379 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 8 19:40:42.023492 update_engine[1452]: I1008 19:40:42.020545 1452 main.cc:92] Flatcar Update Engine starting Oct 8 19:40:42.026464 jq[1477]: true Oct 8 19:40:42.026993 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Oct 8 19:40:42.037584 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 8 19:40:42.037356 dbus-daemon[1440]: [system] SELinux support is enabled Oct 8 19:40:42.042235 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 8 19:40:42.042276 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 8 19:40:42.044391 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 8 19:40:42.044418 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 8 19:40:42.054255 tar[1462]: linux-arm64/helm Oct 8 19:40:42.055591 systemd[1]: Started update-engine.service - Update Engine. Oct 8 19:40:42.056847 update_engine[1452]: I1008 19:40:42.056568 1452 update_check_scheduler.cc:74] Next update check in 7m23s Oct 8 19:40:42.069130 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 8 19:40:42.141170 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1305) Oct 8 19:40:42.157776 systemd-logind[1449]: New seat seat0. 
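[Editor's example] The kernel line above grows the root filesystem on /dev/sda9 from 1617920 to 9393147 blocks, and the resize2fs output that follows reports 4k blocks. A quick back-of-the-envelope conversion (plain arithmetic on the values in the log, no other assumptions):

    BLOCK = 4096                              # 4k blocks, as reported by resize2fs
    old_blocks, new_blocks = 1_617_920, 9_393_147

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        size = blocks * BLOCK
        print(f"{label}: {size} bytes ≈ {size / 1e9:.1f} GB ≈ {size / 2**30:.1f} GiB")
    # before: 6627000320 bytes ≈ 6.6 GB ≈ 6.2 GiB
    # after: 38474330112 bytes ≈ 38.5 GB ≈ 35.8 GiB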
Oct 8 19:40:42.168962 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (Power Button) Oct 8 19:40:42.168989 systemd-logind[1449]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Oct 8 19:40:42.170280 systemd[1]: Started systemd-logind.service - User Login Management. Oct 8 19:40:42.210819 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Oct 8 19:40:42.233189 extend-filesystems[1484]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 8 19:40:42.233189 extend-filesystems[1484]: old_desc_blocks = 1, new_desc_blocks = 5 Oct 8 19:40:42.233189 extend-filesystems[1484]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Oct 8 19:40:42.242100 extend-filesystems[1442]: Resized filesystem in /dev/sda9 Oct 8 19:40:42.242100 extend-filesystems[1442]: Found sr0 Oct 8 19:40:42.235958 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 8 19:40:42.243055 bash[1508]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:40:42.237854 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 8 19:40:42.245514 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 8 19:40:42.256181 systemd[1]: Starting sshkeys.service... Oct 8 19:40:42.293993 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 8 19:40:42.302216 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 8 19:40:42.340746 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 19:40:42.361461 coreos-metadata[1521]: Oct 08 19:40:42.360 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Oct 8 19:40:42.365372 coreos-metadata[1521]: Oct 08 19:40:42.365 INFO Fetch successful Oct 8 19:40:42.378889 unknown[1521]: wrote ssh authorized keys file for user: core Oct 8 19:40:42.416255 update-ssh-keys[1525]: Updated "/home/core/.ssh/authorized_keys" Oct 8 19:40:42.419819 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 19:40:42.423329 systemd[1]: Finished sshkeys.service. Oct 8 19:40:42.460769 containerd[1464]: time="2024-10-08T19:40:42.459638520Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 19:40:42.519370 containerd[1464]: time="2024-10-08T19:40:42.517836240Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 19:40:42.519370 containerd[1464]: time="2024-10-08T19:40:42.517893120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522063 containerd[1464]: time="2024-10-08T19:40:42.522012320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522063 containerd[1464]: time="2024-10-08T19:40:42.522060760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522329 containerd[1464]: time="2024-10-08T19:40:42.522302480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522329 containerd[1464]: time="2024-10-08T19:40:42.522325840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 19:40:42.522430 containerd[1464]: time="2024-10-08T19:40:42.522399480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522463 containerd[1464]: time="2024-10-08T19:40:42.522444800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522487 containerd[1464]: time="2024-10-08T19:40:42.522461000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522535 containerd[1464]: time="2024-10-08T19:40:42.522519600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:42.522741 containerd[1464]: time="2024-10-08T19:40:42.522719720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:42.524748 containerd[1464]: time="2024-10-08T19:40:42.522744160Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 19:40:42.524808 containerd[1464]: time="2024-10-08T19:40:42.524780080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 19:40:42.524984 containerd[1464]: time="2024-10-08T19:40:42.524960400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 19:40:42.525011 containerd[1464]: time="2024-10-08T19:40:42.524985200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 19:40:42.525092 containerd[1464]: time="2024-10-08T19:40:42.525074320Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 19:40:42.525117 containerd[1464]: time="2024-10-08T19:40:42.525096120Z" level=info msg="metadata content store policy set" policy=shared Oct 8 19:40:42.535901 containerd[1464]: time="2024-10-08T19:40:42.535840680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 19:40:42.535901 containerd[1464]: time="2024-10-08T19:40:42.535904480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 19:40:42.536022 containerd[1464]: time="2024-10-08T19:40:42.535923760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 19:40:42.536022 containerd[1464]: time="2024-10-08T19:40:42.535960320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 8 19:40:42.536022 containerd[1464]: time="2024-10-08T19:40:42.535975440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Oct 8 19:40:42.536022 containerd[1464]: time="2024-10-08T19:40:42.535988640Z" level=info msg="NRI interface is disabled by configuration." Oct 8 19:40:42.536022 containerd[1464]: time="2024-10-08T19:40:42.536002720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 19:40:42.536206 containerd[1464]: time="2024-10-08T19:40:42.536180800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 19:40:42.536236 containerd[1464]: time="2024-10-08T19:40:42.536205920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 19:40:42.536236 containerd[1464]: time="2024-10-08T19:40:42.536220840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 19:40:42.536270 containerd[1464]: time="2024-10-08T19:40:42.536235200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 19:40:42.536270 containerd[1464]: time="2024-10-08T19:40:42.536249680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536270 containerd[1464]: time="2024-10-08T19:40:42.536265960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536324 containerd[1464]: time="2024-10-08T19:40:42.536281560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536324 containerd[1464]: time="2024-10-08T19:40:42.536294200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536324 containerd[1464]: time="2024-10-08T19:40:42.536308640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536324 containerd[1464]: time="2024-10-08T19:40:42.536321640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536389 containerd[1464]: time="2024-10-08T19:40:42.536335040Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536389 containerd[1464]: time="2024-10-08T19:40:42.536347440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 19:40:42.536570 containerd[1464]: time="2024-10-08T19:40:42.536465480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 19:40:42.536777 containerd[1464]: time="2024-10-08T19:40:42.536741760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 19:40:42.536815 containerd[1464]: time="2024-10-08T19:40:42.536789320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.536815 containerd[1464]: time="2024-10-08T19:40:42.536804200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 19:40:42.536855 containerd[1464]: time="2024-10-08T19:40:42.536828440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540231200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540289280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540307880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540326320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540343560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540394160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540411560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540428200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540446680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540643960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540669920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540688200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540705160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540722120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.542767 containerd[1464]: time="2024-10-08T19:40:42.540742880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.543284 containerd[1464]: time="2024-10-08T19:40:42.543255000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 19:40:42.543353 containerd[1464]: time="2024-10-08T19:40:42.543338880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 8 19:40:42.546848 containerd[1464]: time="2024-10-08T19:40:42.546162440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 19:40:42.546848 containerd[1464]: time="2024-10-08T19:40:42.546241360Z" level=info msg="Connect containerd service" Oct 8 19:40:42.546848 containerd[1464]: time="2024-10-08T19:40:42.546293600Z" level=info msg="using legacy CRI server" Oct 8 19:40:42.546848 containerd[1464]: time="2024-10-08T19:40:42.546301720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 19:40:42.546848 containerd[1464]: time="2024-10-08T19:40:42.546468280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:40:42.547500 containerd[1464]: time="2024-10-08T19:40:42.547467840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:40:42.547597 
containerd[1464]: time="2024-10-08T19:40:42.547582720Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:40:42.547662 containerd[1464]: time="2024-10-08T19:40:42.547645160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:40:42.547710 containerd[1464]: time="2024-10-08T19:40:42.547698560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:40:42.547779 containerd[1464]: time="2024-10-08T19:40:42.547749040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:40:42.549058 containerd[1464]: time="2024-10-08T19:40:42.547777600Z" level=info msg="Start subscribing containerd event" Oct 8 19:40:42.549123 containerd[1464]: time="2024-10-08T19:40:42.549068160Z" level=info msg="Start recovering state" Oct 8 19:40:42.551774 containerd[1464]: time="2024-10-08T19:40:42.549154200Z" level=info msg="Start event monitor" Oct 8 19:40:42.551774 containerd[1464]: time="2024-10-08T19:40:42.549172400Z" level=info msg="Start snapshots syncer" Oct 8 19:40:42.551774 containerd[1464]: time="2024-10-08T19:40:42.549184800Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:40:42.551774 containerd[1464]: time="2024-10-08T19:40:42.549192040Z" level=info msg="Start streaming server" Oct 8 19:40:42.551774 containerd[1464]: time="2024-10-08T19:40:42.549537120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:40:42.551774 containerd[1464]: time="2024-10-08T19:40:42.549592800Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:40:42.551774 containerd[1464]: time="2024-10-08T19:40:42.549640360Z" level=info msg="containerd successfully booted in 0.094470s" Oct 8 19:40:42.549788 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:40:42.727277 tar[1462]: linux-arm64/LICENSE Oct 8 19:40:42.727368 tar[1462]: linux-arm64/README.md Oct 8 19:40:42.739867 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:40:43.221389 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:40:43.246492 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:40:43.254168 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:40:43.269109 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:40:43.269408 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:40:43.279457 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:40:43.293557 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:40:43.301193 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:40:43.303942 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 19:40:43.305985 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 19:40:43.400989 systemd-networkd[1373]: eth1: Gained IPv6LL Oct 8 19:40:43.403331 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Oct 8 19:40:43.405017 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Oct 8 19:40:43.410056 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 19:40:43.419049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:40:43.423900 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 19:40:43.463138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 19:40:43.593461 systemd-networkd[1373]: eth0: Gained IPv6LL Oct 8 19:40:43.594044 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Oct 8 19:40:44.143871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:40:44.145361 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:40:44.150539 systemd[1]: Startup finished in 810ms (kernel) + 5.495s (initrd) + 4.971s (userspace) = 11.277s. Oct 8 19:40:44.156098 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:40:44.792431 kubelet[1566]: E1008 19:40:44.792297 1566 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:40:44.796548 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:40:44.796849 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:40:54.952946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:40:54.963000 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:40:55.085349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:40:55.087956 (kubelet)[1586]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:40:55.146937 kubelet[1586]: E1008 19:40:55.146873 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:40:55.151694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:40:55.152104 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:05.202404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:41:05.208025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:05.322635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:41:05.334163 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:05.388638 kubelet[1602]: E1008 19:41:05.388578 1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:05.391405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:05.391570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:14.288967 systemd-timesyncd[1394]: Contacted time server 176.9.157.155:123 (2.flatcar.pool.ntp.org). Oct 8 19:41:14.289022 systemd-timesyncd[1394]: Initial clock synchronization to Tue 2024-10-08 19:41:14.288805 UTC. Oct 8 19:41:14.289546 systemd-resolved[1376]: Clock change detected. Flushing caches. Oct 8 19:41:15.939477 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 19:41:15.950750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:16.059199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:16.070967 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:16.140019 kubelet[1617]: E1008 19:41:16.139951 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:16.144882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:16.145097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:26.189419 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 19:41:26.200731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:26.337139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:26.342337 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:26.398316 kubelet[1633]: E1008 19:41:26.398240 1633 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:26.401047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:26.401176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:27.405465 update_engine[1452]: I1008 19:41:27.405185 1452 update_attempter.cc:509] Updating boot flags... 
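[Editor's example] The kubelet keeps exiting because /var/lib/kubelet/config.yaml does not exist yet (that file is typically created later, e.g. by kubeadm), and systemd keeps scheduling restarts, so the journal repeats the same pair of lines with an increasing restart counter. A small sketch that tallies those restarts from a saved journal text like this one; the regular expressions match only the exact phrasing shown above.

    import re
    import sys

    RESTART = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)\.")
    FAILURE = re.compile(r"failed to load Kubelet config file (/\S+)")

    counters, missing_paths = [], set()
    for line in sys.stdin:        # e.g. journalctl -u kubelet.service | python3 this_script.py
        if m := RESTART.search(line):
            counters.append(int(m.group(1)))
        if m := FAILURE.search(line):
            missing_paths.add(m.group(1).rstrip(","))

    print(f"restart attempts seen: {len(counters)} (highest counter: {max(counters, default=0)})")
    for path in sorted(missing_paths):
        print(f"kubelet could not load: {path}")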
Oct 8 19:41:27.451487 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1650) Oct 8 19:41:27.520606 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1651) Oct 8 19:41:36.439166 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 19:41:36.446700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:36.571184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:36.576741 (kubelet)[1667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:36.627730 kubelet[1667]: E1008 19:41:36.627654 1667 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:36.631521 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:36.631684 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:46.689014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 19:41:46.698739 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:46.818998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:46.833027 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:46.886281 kubelet[1683]: E1008 19:41:46.886217 1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:46.889080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:46.889251 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:41:56.939519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 19:41:56.953714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:41:57.109136 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:41:57.118933 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:41:57.180953 kubelet[1699]: E1008 19:41:57.180876 1699 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:41:57.186705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:41:57.187205 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:07.188942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 8 19:42:07.195661 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 19:42:07.336350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:07.351836 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:07.404725 kubelet[1715]: E1008 19:42:07.404609 1715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:07.407210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:07.407408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:17.439287 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Oct 8 19:42:17.446665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:17.571836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:17.586060 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:17.636135 kubelet[1731]: E1008 19:42:17.636078 1731 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:17.639093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:17.639245 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:27.689409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 8 19:42:27.699282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:27.818957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:27.829932 (kubelet)[1747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:27.883026 kubelet[1747]: E1008 19:42:27.882961 1747 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:27.885935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:27.886124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:34.582866 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 19:42:34.588805 systemd[1]: Started sshd@0-49.13.142.189:22-139.178.89.65:57484.service - OpenSSH per-connection server daemon (139.178.89.65:57484). Oct 8 19:42:35.611560 sshd[1756]: Accepted publickey for core from 139.178.89.65 port 57484 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:35.618092 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:35.643802 systemd-logind[1449]: New session 1 of user core. 
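[Editor's example] The sshd lines above identify the accepted public key only by its fingerprint (RSA SHA256:RD/Z11mw...). For reference, OpenSSH computes that value as the unpadded base64 encoding of the SHA-256 digest of the raw key blob. A minimal sketch follows; the commented usage line is hypothetical and does not refer to the key on this host.

    import base64
    import hashlib

    def ssh_sha256_fingerprint(pubkey_line: str) -> str:
        """Return the OpenSSH-style SHA256 fingerprint of an authorized_keys line."""
        blob = base64.b64decode(pubkey_line.split()[1])   # decode the base64 key body
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Hypothetical usage; substitute a real public key line:
    # print(ssh_sha256_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))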
Oct 8 19:42:35.644630 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:42:35.650746 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:42:35.666574 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:42:35.677795 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:42:35.681848 (systemd)[1760]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:35.799116 systemd[1760]: Queued start job for default target default.target. Oct 8 19:42:35.808769 systemd[1760]: Created slice app.slice - User Application Slice. Oct 8 19:42:35.808806 systemd[1760]: Reached target paths.target - Paths. Oct 8 19:42:35.808822 systemd[1760]: Reached target timers.target - Timers. Oct 8 19:42:35.810751 systemd[1760]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:42:35.828558 systemd[1760]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 8 19:42:35.828721 systemd[1760]: Reached target sockets.target - Sockets. Oct 8 19:42:35.828742 systemd[1760]: Reached target basic.target - Basic System. Oct 8 19:42:35.828802 systemd[1760]: Reached target default.target - Main User Target. Oct 8 19:42:35.828842 systemd[1760]: Startup finished in 139ms. Oct 8 19:42:35.829195 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:42:35.846171 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:42:36.547975 systemd[1]: Started sshd@1-49.13.142.189:22-139.178.89.65:59046.service - OpenSSH per-connection server daemon (139.178.89.65:59046). Oct 8 19:42:37.524207 sshd[1771]: Accepted publickey for core from 139.178.89.65 port 59046 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:37.526185 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:37.532054 systemd-logind[1449]: New session 2 of user core. Oct 8 19:42:37.537770 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 8 19:42:37.939110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Oct 8 19:42:37.950668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:38.082022 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:38.098039 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:38.152045 kubelet[1783]: E1008 19:42:38.151971 1783 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:38.155519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:38.155855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:38.209846 sshd[1771]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:38.215616 systemd[1]: sshd@1-49.13.142.189:22-139.178.89.65:59046.service: Deactivated successfully. Oct 8 19:42:38.218981 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:42:38.223152 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. 
Oct 8 19:42:38.224998 systemd-logind[1449]: Removed session 2. Oct 8 19:42:38.386020 systemd[1]: Started sshd@2-49.13.142.189:22-139.178.89.65:59062.service - OpenSSH per-connection server daemon (139.178.89.65:59062). Oct 8 19:42:39.373851 sshd[1793]: Accepted publickey for core from 139.178.89.65 port 59062 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:39.375812 sshd[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:39.382070 systemd-logind[1449]: New session 3 of user core. Oct 8 19:42:39.388141 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:42:40.063738 sshd[1793]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:40.070072 systemd[1]: sshd@2-49.13.142.189:22-139.178.89.65:59062.service: Deactivated successfully. Oct 8 19:42:40.073420 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:42:40.075975 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:42:40.078508 systemd-logind[1449]: Removed session 3. Oct 8 19:42:40.243261 systemd[1]: Started sshd@3-49.13.142.189:22-139.178.89.65:59070.service - OpenSSH per-connection server daemon (139.178.89.65:59070). Oct 8 19:42:41.212030 sshd[1800]: Accepted publickey for core from 139.178.89.65 port 59070 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:41.213794 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:41.219394 systemd-logind[1449]: New session 4 of user core. Oct 8 19:42:41.229654 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:42:41.884788 sshd[1800]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:41.890454 systemd[1]: sshd@3-49.13.142.189:22-139.178.89.65:59070.service: Deactivated successfully. Oct 8 19:42:41.895644 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:42:41.898051 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:42:41.899330 systemd-logind[1449]: Removed session 4. Oct 8 19:42:42.068774 systemd[1]: Started sshd@4-49.13.142.189:22-139.178.89.65:59084.service - OpenSSH per-connection server daemon (139.178.89.65:59084). Oct 8 19:42:43.045466 sshd[1807]: Accepted publickey for core from 139.178.89.65 port 59084 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:43.048308 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:43.054045 systemd-logind[1449]: New session 5 of user core. Oct 8 19:42:43.059602 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:42:43.592862 sudo[1810]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:42:43.593122 sudo[1810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:43.610110 sudo[1810]: pam_unix(sudo:session): session closed for user root Oct 8 19:42:43.769827 sshd[1807]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:43.774916 systemd[1]: sshd@4-49.13.142.189:22-139.178.89.65:59084.service: Deactivated successfully. Oct 8 19:42:43.777641 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:42:43.779146 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:42:43.780710 systemd-logind[1449]: Removed session 5. Oct 8 19:42:43.939790 systemd[1]: Started sshd@5-49.13.142.189:22-139.178.89.65:59098.service - OpenSSH per-connection server daemon (139.178.89.65:59098). 
Oct 8 19:42:44.914495 sshd[1815]: Accepted publickey for core from 139.178.89.65 port 59098 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:44.916300 sshd[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:44.924415 systemd-logind[1449]: New session 6 of user core. Oct 8 19:42:44.930615 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 8 19:42:45.435824 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:42:45.436194 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:45.440799 sudo[1819]: pam_unix(sudo:session): session closed for user root Oct 8 19:42:45.447052 sudo[1818]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:42:45.447578 sudo[1818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:45.462784 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:42:45.466825 auditctl[1822]: No rules Oct 8 19:42:45.467231 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:42:45.467500 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:42:45.474996 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:42:45.516394 augenrules[1841]: No rules Oct 8 19:42:45.517867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:42:45.521015 sudo[1818]: pam_unix(sudo:session): session closed for user root Oct 8 19:42:45.682983 sshd[1815]: pam_unix(sshd:session): session closed for user core Oct 8 19:42:45.689008 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:42:45.689766 systemd[1]: sshd@5-49.13.142.189:22-139.178.89.65:59098.service: Deactivated successfully. Oct 8 19:42:45.693083 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:42:45.694962 systemd-logind[1449]: Removed session 6. Oct 8 19:42:45.861774 systemd[1]: Started sshd@6-49.13.142.189:22-139.178.89.65:40278.service - OpenSSH per-connection server daemon (139.178.89.65:40278). Oct 8 19:42:46.859085 sshd[1849]: Accepted publickey for core from 139.178.89.65 port 40278 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:42:46.862220 sshd[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:42:46.870519 systemd-logind[1449]: New session 7 of user core. Oct 8 19:42:46.881687 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:42:47.392204 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:42:47.392551 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:42:47.528898 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:42:47.529201 (dockerd)[1861]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:42:47.802403 dockerd[1861]: time="2024-10-08T19:42:47.802203151Z" level=info msg="Starting up" Oct 8 19:42:47.824523 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport129087972-merged.mount: Deactivated successfully. Oct 8 19:42:47.866145 dockerd[1861]: time="2024-10-08T19:42:47.866081389Z" level=info msg="Loading containers: start." 
Oct 8 19:42:47.995409 kernel: Initializing XFRM netlink socket Oct 8 19:42:48.120560 systemd-networkd[1373]: docker0: Link UP Oct 8 19:42:48.149034 dockerd[1861]: time="2024-10-08T19:42:48.148977061Z" level=info msg="Loading containers: done." Oct 8 19:42:48.188780 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Oct 8 19:42:48.204663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:48.243582 dockerd[1861]: time="2024-10-08T19:42:48.243532098Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:42:48.244078 dockerd[1861]: time="2024-10-08T19:42:48.244056058Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 8 19:42:48.244341 dockerd[1861]: time="2024-10-08T19:42:48.244324618Z" level=info msg="Daemon has completed initialization" Oct 8 19:42:48.304576 dockerd[1861]: time="2024-10-08T19:42:48.304508416Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:42:48.307553 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:42:48.344571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:48.350016 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:48.418737 kubelet[1990]: E1008 19:42:48.418614 1990 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:48.421600 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:48.421743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:42:49.360248 containerd[1464]: time="2024-10-08T19:42:49.359833826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\"" Oct 8 19:42:50.049383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238172482.mount: Deactivated successfully. 
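The kubelet exit just above (restart counter 12, "open /var/lib/kubelet/config.yaml: no such file or directory") is expected at this point in the boot: the unit keeps restarting until the bootstrap tooling writes a kubelet configuration to that path, which evidently happens before the successful kubelet start further down. Purely as an illustration of the kind of file the kubelet is waiting for, and not the configuration this node actually receives, a minimal KubeletConfiguration could be dropped in place like this:

    # Illustrative only -- on this node the file is generated by the cluster bootstrap, not written by hand.
    sudo tee /var/lib/kubelet/config.yaml <<'EOF' >/dev/null
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                      # matches CgroupDriver:"systemd" in the container manager dump later in this log
    staticPodPath: /etc/kubernetes/manifests   # matches the "Adding static pod path" entry later in this log
    authentication:
      anonymous:
        enabled: false
    EOF
    sudo systemctl restart kubelet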
Oct 8 19:42:50.992464 containerd[1464]: time="2024-10-08T19:42:50.991413472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:50.993339 containerd[1464]: time="2024-10-08T19:42:50.993294562Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.5: active requests=0, bytes read=29946054" Oct 8 19:42:50.995134 containerd[1464]: time="2024-10-08T19:42:50.995081612Z" level=info msg="ImageCreate event name:\"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:51.000979 containerd[1464]: time="2024-10-08T19:42:51.000881964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:51.004071 containerd[1464]: time="2024-10-08T19:42:51.003709043Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.5\" with image id \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7746ea55ad74e24b8edebb53fb979ffe802e2bc47e3b7a12c8e1b0961d273ed2\", size \"29942762\" in 1.643814576s" Oct 8 19:42:51.004071 containerd[1464]: time="2024-10-08T19:42:51.003794207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.5\" returns image reference \"sha256:2bf7f63bc5e4cb1f93cdd13e325e181862614b805d7cc45282599fb6dd1d329d\"" Oct 8 19:42:51.028709 containerd[1464]: time="2024-10-08T19:42:51.028660360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\"" Oct 8 19:42:52.290829 containerd[1464]: time="2024-10-08T19:42:52.290393536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:52.292591 containerd[1464]: time="2024-10-08T19:42:52.292519318Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.5: active requests=0, bytes read=26885793" Oct 8 19:42:52.293435 containerd[1464]: time="2024-10-08T19:42:52.292927298Z" level=info msg="ImageCreate event name:\"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:52.300422 containerd[1464]: time="2024-10-08T19:42:52.299111836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:52.301044 containerd[1464]: time="2024-10-08T19:42:52.300906442Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.5\" with image id \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bbd15d267294a22a20bf92a77b3ff0e1db7cfb2ce76991da2aaa03d09db3b645\", size \"28373587\" in 1.2721954s" Oct 8 19:42:52.301044 containerd[1464]: time="2024-10-08T19:42:52.300952005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.5\" returns image reference \"sha256:e1be44cf89df192ebc5b44737bf94ac472fe9a0eb3ddf9422d96eed2380ea7e6\"" Oct 8 19:42:52.324105 containerd[1464]: 
time="2024-10-08T19:42:52.323928912Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\"" Oct 8 19:42:53.304084 containerd[1464]: time="2024-10-08T19:42:53.303997563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:53.306425 containerd[1464]: time="2024-10-08T19:42:53.306006817Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.5: active requests=0, bytes read=16154292" Oct 8 19:42:53.308514 containerd[1464]: time="2024-10-08T19:42:53.308445011Z" level=info msg="ImageCreate event name:\"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:53.313355 containerd[1464]: time="2024-10-08T19:42:53.313252277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:53.315638 containerd[1464]: time="2024-10-08T19:42:53.315070762Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.5\" with image id \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:62c91756a3c9b535ef97655a5bcca05e67e75b578f77fc907d8599a195946ee9\", size \"17642104\" in 991.071966ms" Oct 8 19:42:53.315638 containerd[1464]: time="2024-10-08T19:42:53.315134365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.5\" returns image reference \"sha256:b6db73bf7694d702f3d1cb29dc3e4051df33cc6316cd3636eabbab1e6d26466f\"" Oct 8 19:42:53.350426 containerd[1464]: time="2024-10-08T19:42:53.350267852Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\"" Oct 8 19:42:54.341928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175464353.mount: Deactivated successfully. 
Oct 8 19:42:54.706387 containerd[1464]: time="2024-10-08T19:42:54.706229870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:54.707958 containerd[1464]: time="2024-10-08T19:42:54.707888426Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.5: active requests=0, bytes read=25648367" Oct 8 19:42:54.708761 containerd[1464]: time="2024-10-08T19:42:54.708709143Z" level=info msg="ImageCreate event name:\"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:54.711446 containerd[1464]: time="2024-10-08T19:42:54.711333343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:54.712473 containerd[1464]: time="2024-10-08T19:42:54.712281466Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.5\" with image id \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\", repo tag \"registry.k8s.io/kube-proxy:v1.30.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:fa20f91153b9e521ed2195d760af6ebf97fd8f5ee54e2164b7e6da6d0651fd13\", size \"25647360\" in 1.361970251s" Oct 8 19:42:54.712473 containerd[1464]: time="2024-10-08T19:42:54.712334388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.5\" returns image reference \"sha256:57f247cd1b5672dc99f46b3e3e288bbc06e9c17dfcfdb6b855cd83af9a418d43\"" Oct 8 19:42:54.736340 containerd[1464]: time="2024-10-08T19:42:54.736236558Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:42:55.369784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803452762.mount: Deactivated successfully. 
Oct 8 19:42:55.981907 containerd[1464]: time="2024-10-08T19:42:55.981840756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:55.983218 containerd[1464]: time="2024-10-08T19:42:55.983176855Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461" Oct 8 19:42:55.985637 containerd[1464]: time="2024-10-08T19:42:55.985463516Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:55.989804 containerd[1464]: time="2024-10-08T19:42:55.989739426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:55.991648 containerd[1464]: time="2024-10-08T19:42:55.991533266Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.255251105s" Oct 8 19:42:55.991648 containerd[1464]: time="2024-10-08T19:42:55.991637550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:42:56.015955 containerd[1464]: time="2024-10-08T19:42:56.015880726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Oct 8 19:42:56.553456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545312178.mount: Deactivated successfully. 
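These pulls go through containerd's CRI plugin, so the images land in containerd's k8s.io namespace rather than in the Docker image store started earlier. On a node in this state they could be listed with something like the following (assuming crictl is available and pointed at containerd, e.g. via --runtime-endpoint unix:///run/containerd/containerd.sock if /etc/crictl.yaml is absent):

    # CRI view of the images pulled above (kube-apiserver, controller-manager, scheduler, proxy, coredns, pause).
    crictl images
    # Same data straight from containerd; CRI-managed images live in the k8s.io namespace.
    ctr --namespace k8s.io images ls | grep registry.k8s.io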
Oct 8 19:42:56.563214 containerd[1464]: time="2024-10-08T19:42:56.562799911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:56.564349 containerd[1464]: time="2024-10-08T19:42:56.564263614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841" Oct 8 19:42:56.565709 containerd[1464]: time="2024-10-08T19:42:56.565623913Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:56.570739 containerd[1464]: time="2024-10-08T19:42:56.570630569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:56.572022 containerd[1464]: time="2024-10-08T19:42:56.571490966Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 555.559757ms" Oct 8 19:42:56.572022 containerd[1464]: time="2024-10-08T19:42:56.571571929Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Oct 8 19:42:56.594424 containerd[1464]: time="2024-10-08T19:42:56.594356672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Oct 8 19:42:57.212776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2032249743.mount: Deactivated successfully. Oct 8 19:42:58.440109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Oct 8 19:42:58.449640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:42:58.574239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:42:58.588039 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:42:58.644093 kubelet[2206]: E1008 19:42:58.643223 2206 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:42:58.646229 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:42:58.646412 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 19:42:58.851943 containerd[1464]: time="2024-10-08T19:42:58.851811719Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:58.853909 containerd[1464]: time="2024-10-08T19:42:58.853664315Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191552" Oct 8 19:42:58.854813 containerd[1464]: time="2024-10-08T19:42:58.854750839Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:58.858808 containerd[1464]: time="2024-10-08T19:42:58.858730401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:42:58.860038 containerd[1464]: time="2024-10-08T19:42:58.859986053Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.265503095s" Oct 8 19:42:58.860038 containerd[1464]: time="2024-10-08T19:42:58.860035255Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Oct 8 19:43:04.146595 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:04.159777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:43:04.177622 systemd[1]: Reloading requested from client PID 2279 ('systemctl') (unit session-7.scope)... Oct 8 19:43:04.177640 systemd[1]: Reloading... Oct 8 19:43:04.301484 zram_generator::config[2316]: No configuration found. Oct 8 19:43:04.398482 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:43:04.465784 systemd[1]: Reloading finished in 287 ms. Oct 8 19:43:04.522831 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:43:04.522894 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:43:04.523128 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:04.526684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:43:04.658663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:04.662339 (kubelet)[2365]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:43:04.712036 kubelet[2365]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:43:04.712036 kubelet[2365]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
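The "Reloading requested from client PID 2279 ('systemctl')" entry is most likely a systemctl daemon-reload issued from the install session (unit session-7.scope) once the real kubelet unit drop-ins and configuration are in place; the in-flight start job is then killed (status=15/TERM) and the unit comes back below as a fully configured kubelet (PID 2365). A hedged shell equivalent of what that session appears to be doing:

    # Inferred from the systemd entries above; the exact commands are not shown verbatim in this log.
    sudo systemctl daemon-reload          # logged as: Reloading requested from client PID ... ('systemctl')
    sudo systemctl restart kubelet        # the pending start job exits on SIGTERM, a new kubelet main process starts
    systemctl status kubelet --no-pager   # confirm the unit is active and note the new main PID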
Oct 8 19:43:04.712036 kubelet[2365]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:43:04.712560 kubelet[2365]: I1008 19:43:04.712130 2365 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:43:05.601467 kubelet[2365]: I1008 19:43:05.601269 2365 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 19:43:05.601467 kubelet[2365]: I1008 19:43:05.601302 2365 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:43:05.601467 kubelet[2365]: I1008 19:43:05.601539 2365 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 19:43:05.641078 kubelet[2365]: I1008 19:43:05.640733 2365 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:43:05.641406 kubelet[2365]: E1008 19:43:05.641253 2365 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.142.189:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.652561 kubelet[2365]: I1008 19:43:05.651799 2365 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:43:05.653386 kubelet[2365]: I1008 19:43:05.653273 2365 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:43:05.653582 kubelet[2365]: I1008 19:43:05.653339 2365 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-2-2-5-28a2d443fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:43:05.653686 kubelet[2365]: I1008 19:43:05.653613 2365 topology_manager.go:138] "Creating topology 
manager with none policy" Oct 8 19:43:05.653686 kubelet[2365]: I1008 19:43:05.653624 2365 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:43:05.653891 kubelet[2365]: I1008 19:43:05.653841 2365 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:43:05.655080 kubelet[2365]: I1008 19:43:05.654985 2365 kubelet.go:400] "Attempting to sync node with API server" Oct 8 19:43:05.655080 kubelet[2365]: I1008 19:43:05.655013 2365 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:43:05.655276 kubelet[2365]: I1008 19:43:05.655257 2365 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:43:05.655997 kubelet[2365]: I1008 19:43:05.655448 2365 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:43:05.661381 kubelet[2365]: W1008 19:43:05.661306 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.142.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-5-28a2d443fc&limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.661596 kubelet[2365]: E1008 19:43:05.661582 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.142.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-5-28a2d443fc&limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.661825 kubelet[2365]: I1008 19:43:05.661802 2365 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:43:05.662451 kubelet[2365]: I1008 19:43:05.662434 2365 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:43:05.662740 kubelet[2365]: W1008 19:43:05.662723 2365 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
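The failed CertificateSigningRequest POST and the repeated "connection refused" errors against 49.13.142.189:6443 above are normal bootstrap ordering: this kubelet is about to start the API server itself as a static pod, and its background client-certificate bootstrap can only complete once that endpoint answers. The second kubelet start later in this log loads the rotated credential from /var/lib/kubelet/pki/kubelet-client-current.pem; on the node, the outcome can be checked with something like:

    # Once bootstrap completes, the rotated client credential appears here
    # (kubelet-client-current.pem is a symlink to the newest dated .pem; exact filenames vary).
    ls -l /var/lib/kubelet/pki/
    # And the API server endpoint seen in the log should start answering:
    curl -ks https://49.13.142.189:6443/healthz ; echo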
Oct 8 19:43:05.666006 kubelet[2365]: I1008 19:43:05.665967 2365 server.go:1264] "Started kubelet" Oct 8 19:43:05.676524 kubelet[2365]: W1008 19:43:05.672600 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.142.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.676524 kubelet[2365]: E1008 19:43:05.672674 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.142.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.678394 kubelet[2365]: I1008 19:43:05.678261 2365 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:43:05.678394 kubelet[2365]: I1008 19:43:05.678299 2365 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:43:05.678814 kubelet[2365]: I1008 19:43:05.678700 2365 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:43:05.681413 kubelet[2365]: E1008 19:43:05.679346 2365 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.142.189:6443/api/v1/namespaces/default/events\": dial tcp 49.13.142.189:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-2-2-5-28a2d443fc.17fc91ba2482d9ba default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-5-28a2d443fc,UID:ci-3975-2-2-5-28a2d443fc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-5-28a2d443fc,},FirstTimestamp:2024-10-08 19:43:05.66592761 +0000 UTC m=+0.999744152,LastTimestamp:2024-10-08 19:43:05.66592761 +0000 UTC m=+0.999744152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-5-28a2d443fc,}" Oct 8 19:43:05.686411 kubelet[2365]: I1008 19:43:05.684894 2365 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:43:05.686411 kubelet[2365]: I1008 19:43:05.685968 2365 server.go:455] "Adding debug handlers to kubelet server" Oct 8 19:43:05.688374 kubelet[2365]: I1008 19:43:05.688324 2365 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:43:05.689200 kubelet[2365]: E1008 19:43:05.689163 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.142.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-5-28a2d443fc?timeout=10s\": dial tcp 49.13.142.189:6443: connect: connection refused" interval="200ms" Oct 8 19:43:05.690038 kubelet[2365]: I1008 19:43:05.690021 2365 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 19:43:05.690871 kubelet[2365]: W1008 19:43:05.690824 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.142.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.691812 kubelet[2365]: E1008 19:43:05.691485 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://49.13.142.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.692005 kubelet[2365]: I1008 19:43:05.691975 2365 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:43:05.692203 kubelet[2365]: I1008 19:43:05.692171 2365 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:43:05.694584 kubelet[2365]: I1008 19:43:05.694552 2365 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:43:05.694750 kubelet[2365]: E1008 19:43:05.694714 2365 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:43:05.694947 kubelet[2365]: I1008 19:43:05.694921 2365 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:43:05.707794 kubelet[2365]: I1008 19:43:05.707738 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:43:05.709247 kubelet[2365]: I1008 19:43:05.709215 2365 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:43:05.709570 kubelet[2365]: I1008 19:43:05.709556 2365 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:43:05.709651 kubelet[2365]: I1008 19:43:05.709642 2365 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 19:43:05.709753 kubelet[2365]: E1008 19:43:05.709733 2365 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:43:05.718704 kubelet[2365]: W1008 19:43:05.718631 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.142.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.718704 kubelet[2365]: E1008 19:43:05.718701 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.142.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:05.731812 kubelet[2365]: I1008 19:43:05.731722 2365 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:43:05.731812 kubelet[2365]: I1008 19:43:05.731804 2365 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:43:05.731997 kubelet[2365]: I1008 19:43:05.731832 2365 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:43:05.734724 kubelet[2365]: I1008 19:43:05.734672 2365 policy_none.go:49] "None policy: Start" Oct 8 19:43:05.735611 kubelet[2365]: I1008 19:43:05.735592 2365 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:43:05.736202 kubelet[2365]: I1008 19:43:05.735763 2365 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:43:05.743413 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:43:05.760186 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:43:05.766160 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 19:43:05.775413 kubelet[2365]: I1008 19:43:05.775349 2365 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:43:05.775690 kubelet[2365]: I1008 19:43:05.775630 2365 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:43:05.775766 kubelet[2365]: I1008 19:43:05.775754 2365 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:43:05.778685 kubelet[2365]: E1008 19:43:05.778637 2365 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-2-2-5-28a2d443fc\" not found" Oct 8 19:43:05.792544 kubelet[2365]: I1008 19:43:05.792478 2365 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.794874 kubelet[2365]: E1008 19:43:05.794828 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.142.189:6443/api/v1/nodes\": dial tcp 49.13.142.189:6443: connect: connection refused" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.810035 kubelet[2365]: I1008 19:43:05.809976 2365 topology_manager.go:215] "Topology Admit Handler" podUID="f4e917432488e7e0a7b639fba3454fed" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.814511 kubelet[2365]: I1008 19:43:05.814258 2365 topology_manager.go:215] "Topology Admit Handler" podUID="244bde55cb325834d8317ac5dcbaa5dd" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.817781 kubelet[2365]: I1008 19:43:05.817578 2365 topology_manager.go:215] "Topology Admit Handler" podUID="23cef816f5ee6340f72279a9dad25836" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.826887 systemd[1]: Created slice kubepods-burstable-podf4e917432488e7e0a7b639fba3454fed.slice - libcontainer container kubepods-burstable-podf4e917432488e7e0a7b639fba3454fed.slice. Oct 8 19:43:05.840511 systemd[1]: Created slice kubepods-burstable-pod244bde55cb325834d8317ac5dcbaa5dd.slice - libcontainer container kubepods-burstable-pod244bde55cb325834d8317ac5dcbaa5dd.slice. Oct 8 19:43:05.852218 systemd[1]: Created slice kubepods-burstable-pod23cef816f5ee6340f72279a9dad25836.slice - libcontainer container kubepods-burstable-pod23cef816f5ee6340f72279a9dad25836.slice. 
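The three "Topology Admit Handler" entries and the kubepods-burstable-pod*.slice units show the kubelet admitting the control-plane pods from its static pod path (/etc/kubernetes/manifests) rather than from the API server, which is why it can start them while every request to 49.13.142.189:6443 is still refused. The manifests themselves are not included in this log; as a rough illustration of their shape only (image tag and the k8s-certs volume name taken from elsewhere in this log, everything else assumed), a static kube-apiserver manifest looks something like:

    # Printed rather than installed -- not the manifest from this node, just an illustrative static pod of the same shape.
    cat <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: registry.k8s.io/kube-apiserver:v1.30.5
        command: ["kube-apiserver", "--secure-port=6443"]   # real flag list omitted
        volumeMounts:
        - name: k8s-certs
          mountPath: /etc/kubernetes/pki
          readOnly: true
      volumes:
      - name: k8s-certs                  # name matches the k8s-certs hostPath volume the reconciler attaches below
        hostPath:
          path: /etc/kubernetes/pki      # assumed location; the log only records the volume name
          type: DirectoryOrCreate
    EOF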
Oct 8 19:43:05.891009 kubelet[2365]: E1008 19:43:05.890851 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.142.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-5-28a2d443fc?timeout=10s\": dial tcp 49.13.142.189:6443: connect: connection refused" interval="400ms" Oct 8 19:43:05.895977 kubelet[2365]: I1008 19:43:05.895605 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.895977 kubelet[2365]: I1008 19:43:05.895648 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.895977 kubelet[2365]: I1008 19:43:05.895669 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e917432488e7e0a7b639fba3454fed-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-5-28a2d443fc\" (UID: \"f4e917432488e7e0a7b639fba3454fed\") " pod="kube-system/kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.895977 kubelet[2365]: I1008 19:43:05.895687 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e917432488e7e0a7b639fba3454fed-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-5-28a2d443fc\" (UID: \"f4e917432488e7e0a7b639fba3454fed\") " pod="kube-system/kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.895977 kubelet[2365]: I1008 19:43:05.895707 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e917432488e7e0a7b639fba3454fed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-5-28a2d443fc\" (UID: \"f4e917432488e7e0a7b639fba3454fed\") " pod="kube-system/kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.896212 kubelet[2365]: I1008 19:43:05.895746 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23cef816f5ee6340f72279a9dad25836-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-5-28a2d443fc\" (UID: \"23cef816f5ee6340f72279a9dad25836\") " pod="kube-system/kube-scheduler-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.896212 kubelet[2365]: I1008 19:43:05.895763 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.896212 kubelet[2365]: I1008 19:43:05.895778 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.896212 kubelet[2365]: I1008 19:43:05.895795 2365 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.998707 kubelet[2365]: I1008 19:43:05.998637 2365 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:05.999244 kubelet[2365]: E1008 19:43:05.999186 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.142.189:6443/api/v1/nodes\": dial tcp 49.13.142.189:6443: connect: connection refused" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:06.138145 containerd[1464]: time="2024-10-08T19:43:06.137983873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-5-28a2d443fc,Uid:f4e917432488e7e0a7b639fba3454fed,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:06.151108 containerd[1464]: time="2024-10-08T19:43:06.150779414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-5-28a2d443fc,Uid:244bde55cb325834d8317ac5dcbaa5dd,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:06.159445 containerd[1464]: time="2024-10-08T19:43:06.159391698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-5-28a2d443fc,Uid:23cef816f5ee6340f72279a9dad25836,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:06.291493 kubelet[2365]: E1008 19:43:06.291345 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.142.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-5-28a2d443fc?timeout=10s\": dial tcp 49.13.142.189:6443: connect: connection refused" interval="800ms" Oct 8 19:43:06.403383 kubelet[2365]: I1008 19:43:06.403201 2365 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:06.405411 kubelet[2365]: E1008 19:43:06.404866 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.142.189:6443/api/v1/nodes\": dial tcp 49.13.142.189:6443: connect: connection refused" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:06.523538 kubelet[2365]: W1008 19:43:06.523398 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.142.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:06.523538 kubelet[2365]: E1008 19:43:06.523500 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.142.189:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:06.689991 kubelet[2365]: W1008 19:43:06.689739 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.142.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: 
connection refused Oct 8 19:43:06.689991 kubelet[2365]: E1008 19:43:06.689963 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.142.189:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:06.692967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132920067.mount: Deactivated successfully. Oct 8 19:43:06.702256 containerd[1464]: time="2024-10-08T19:43:06.701907542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:06.703481 containerd[1464]: time="2024-10-08T19:43:06.703273227Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:06.704936 containerd[1464]: time="2024-10-08T19:43:06.704863079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:43:06.706335 containerd[1464]: time="2024-10-08T19:43:06.706092560Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:06.708068 containerd[1464]: time="2024-10-08T19:43:06.707970701Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:43:06.709886 containerd[1464]: time="2024-10-08T19:43:06.709826002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Oct 8 19:43:06.711536 containerd[1464]: time="2024-10-08T19:43:06.711438975Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:06.715279 containerd[1464]: time="2024-10-08T19:43:06.715198059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:43:06.717752 containerd[1464]: time="2024-10-08T19:43:06.717247406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.152209ms" Oct 8 19:43:06.721517 containerd[1464]: time="2024-10-08T19:43:06.721465985Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.960324ms" Oct 8 19:43:06.722355 containerd[1464]: time="2024-10-08T19:43:06.722321413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.429635ms" Oct 8 19:43:06.867083 containerd[1464]: time="2024-10-08T19:43:06.866963571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:06.867294 containerd[1464]: time="2024-10-08T19:43:06.867158057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:06.867294 containerd[1464]: time="2024-10-08T19:43:06.867206939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:06.867294 containerd[1464]: time="2024-10-08T19:43:06.867233980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:06.867469 containerd[1464]: time="2024-10-08T19:43:06.867385585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:06.867679 containerd[1464]: time="2024-10-08T19:43:06.867618312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:06.867731 containerd[1464]: time="2024-10-08T19:43:06.867709675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:06.868021 containerd[1464]: time="2024-10-08T19:43:06.867986284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:06.868149 containerd[1464]: time="2024-10-08T19:43:06.868035766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:06.868462 containerd[1464]: time="2024-10-08T19:43:06.868413659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:06.868462 containerd[1464]: time="2024-10-08T19:43:06.868444900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:06.868620 containerd[1464]: time="2024-10-08T19:43:06.868552263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:06.896652 systemd[1]: Started cri-containerd-6de6cb8f41843ab98306d78108b84f5cd811affe34d4f2738770a8d6df92d6aa.scope - libcontainer container 6de6cb8f41843ab98306d78108b84f5cd811affe34d4f2738770a8d6df92d6aa. 
Oct 8 19:43:06.900430 kubelet[2365]: W1008 19:43:06.900342 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.142.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:06.901334 kubelet[2365]: E1008 19:43:06.900956 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.142.189:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:06.905135 systemd[1]: Started cri-containerd-5a26cd6c11df257fadf14a0040d59af5532471bbbdc4d75f901b1ae486404008.scope - libcontainer container 5a26cd6c11df257fadf14a0040d59af5532471bbbdc4d75f901b1ae486404008. Oct 8 19:43:06.908493 systemd[1]: Started cri-containerd-bdf46e280fd2bac2e532958a17c5de3f3535c19a0184e277b7ad5714f1219f9a.scope - libcontainer container bdf46e280fd2bac2e532958a17c5de3f3535c19a0184e277b7ad5714f1219f9a. Oct 8 19:43:06.967217 containerd[1464]: time="2024-10-08T19:43:06.966757773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-5-28a2d443fc,Uid:f4e917432488e7e0a7b639fba3454fed,Namespace:kube-system,Attempt:0,} returns sandbox id \"6de6cb8f41843ab98306d78108b84f5cd811affe34d4f2738770a8d6df92d6aa\"" Oct 8 19:43:06.976029 containerd[1464]: time="2024-10-08T19:43:06.975951476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-5-28a2d443fc,Uid:244bde55cb325834d8317ac5dcbaa5dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf46e280fd2bac2e532958a17c5de3f3535c19a0184e277b7ad5714f1219f9a\"" Oct 8 19:43:06.979927 containerd[1464]: time="2024-10-08T19:43:06.979844444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-5-28a2d443fc,Uid:23cef816f5ee6340f72279a9dad25836,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a26cd6c11df257fadf14a0040d59af5532471bbbdc4d75f901b1ae486404008\"" Oct 8 19:43:06.983645 containerd[1464]: time="2024-10-08T19:43:06.983496844Z" level=info msg="CreateContainer within sandbox \"6de6cb8f41843ab98306d78108b84f5cd811affe34d4f2738770a8d6df92d6aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:43:06.984110 containerd[1464]: time="2024-10-08T19:43:06.983863816Z" level=info msg="CreateContainer within sandbox \"bdf46e280fd2bac2e532958a17c5de3f3535c19a0184e277b7ad5714f1219f9a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:43:06.984933 containerd[1464]: time="2024-10-08T19:43:06.984896290Z" level=info msg="CreateContainer within sandbox \"5a26cd6c11df257fadf14a0040d59af5532471bbbdc4d75f901b1ae486404008\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:43:07.015145 containerd[1464]: time="2024-10-08T19:43:07.015054270Z" level=info msg="CreateContainer within sandbox \"6de6cb8f41843ab98306d78108b84f5cd811affe34d4f2738770a8d6df92d6aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7ff1001be2835a33f1d9e8bcb7df1ed7a817c686ba6cf34904a0692fdff2cef0\"" Oct 8 19:43:07.017701 containerd[1464]: time="2024-10-08T19:43:07.016343551Z" level=info msg="StartContainer for \"7ff1001be2835a33f1d9e8bcb7df1ed7a817c686ba6cf34904a0692fdff2cef0\"" Oct 8 19:43:07.018061 containerd[1464]: time="2024-10-08T19:43:07.018007285Z" level=info msg="CreateContainer within sandbox 
\"5a26cd6c11df257fadf14a0040d59af5532471bbbdc4d75f901b1ae486404008\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"878120e757659c4c562b44a06b2df657b892e4305d89d99c70aaf415b3161f85\"" Oct 8 19:43:07.019754 containerd[1464]: time="2024-10-08T19:43:07.018575143Z" level=info msg="StartContainer for \"878120e757659c4c562b44a06b2df657b892e4305d89d99c70aaf415b3161f85\"" Oct 8 19:43:07.021691 containerd[1464]: time="2024-10-08T19:43:07.021641161Z" level=info msg="CreateContainer within sandbox \"bdf46e280fd2bac2e532958a17c5de3f3535c19a0184e277b7ad5714f1219f9a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d6990a85e55cc0f5c171ba63d8aa327131e90ff993da46b61f1fc6b266954ef4\"" Oct 8 19:43:07.022444 containerd[1464]: time="2024-10-08T19:43:07.022413426Z" level=info msg="StartContainer for \"d6990a85e55cc0f5c171ba63d8aa327131e90ff993da46b61f1fc6b266954ef4\"" Oct 8 19:43:07.058875 systemd[1]: Started cri-containerd-d6990a85e55cc0f5c171ba63d8aa327131e90ff993da46b61f1fc6b266954ef4.scope - libcontainer container d6990a85e55cc0f5c171ba63d8aa327131e90ff993da46b61f1fc6b266954ef4. Oct 8 19:43:07.071121 systemd[1]: Started cri-containerd-7ff1001be2835a33f1d9e8bcb7df1ed7a817c686ba6cf34904a0692fdff2cef0.scope - libcontainer container 7ff1001be2835a33f1d9e8bcb7df1ed7a817c686ba6cf34904a0692fdff2cef0. Oct 8 19:43:07.080610 systemd[1]: Started cri-containerd-878120e757659c4c562b44a06b2df657b892e4305d89d99c70aaf415b3161f85.scope - libcontainer container 878120e757659c4c562b44a06b2df657b892e4305d89d99c70aaf415b3161f85. Oct 8 19:43:07.091951 kubelet[2365]: E1008 19:43:07.091840 2365 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.142.189:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-5-28a2d443fc?timeout=10s\": dial tcp 49.13.142.189:6443: connect: connection refused" interval="1.6s" Oct 8 19:43:07.160247 containerd[1464]: time="2024-10-08T19:43:07.159439816Z" level=info msg="StartContainer for \"7ff1001be2835a33f1d9e8bcb7df1ed7a817c686ba6cf34904a0692fdff2cef0\" returns successfully" Oct 8 19:43:07.160247 containerd[1464]: time="2024-10-08T19:43:07.160157919Z" level=info msg="StartContainer for \"d6990a85e55cc0f5c171ba63d8aa327131e90ff993da46b61f1fc6b266954ef4\" returns successfully" Oct 8 19:43:07.160247 containerd[1464]: time="2024-10-08T19:43:07.160213240Z" level=info msg="StartContainer for \"878120e757659c4c562b44a06b2df657b892e4305d89d99c70aaf415b3161f85\" returns successfully" Oct 8 19:43:07.201824 kubelet[2365]: W1008 19:43:07.201716 2365 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.142.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-5-28a2d443fc&limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:07.201824 kubelet[2365]: E1008 19:43:07.201799 2365 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.142.189:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-5-28a2d443fc&limit=500&resourceVersion=0": dial tcp 49.13.142.189:6443: connect: connection refused Oct 8 19:43:07.210098 kubelet[2365]: I1008 19:43:07.209923 2365 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:07.210650 kubelet[2365]: E1008 19:43:07.210617 2365 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://49.13.142.189:6443/api/v1/nodes\": dial tcp 49.13.142.189:6443: connect: connection refused" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:08.816295 kubelet[2365]: I1008 19:43:08.815562 2365 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:09.895522 kubelet[2365]: E1008 19:43:09.895479 2365 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-2-2-5-28a2d443fc\" not found" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:09.986701 kubelet[2365]: I1008 19:43:09.986426 2365 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:10.668822 kubelet[2365]: I1008 19:43:10.668492 2365 apiserver.go:52] "Watching apiserver" Oct 8 19:43:10.691149 kubelet[2365]: I1008 19:43:10.691082 2365 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 19:43:12.139151 systemd[1]: Reloading requested from client PID 2646 ('systemctl') (unit session-7.scope)... Oct 8 19:43:12.139171 systemd[1]: Reloading... Oct 8 19:43:12.254501 zram_generator::config[2686]: No configuration found. Oct 8 19:43:12.366084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:43:12.452523 systemd[1]: Reloading finished in 313 ms. Oct 8 19:43:12.498530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:43:12.512857 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:43:12.513179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:12.513347 systemd[1]: kubelet.service: Consumed 1.501s CPU time, 113.3M memory peak, 0B memory swap peak. Oct 8 19:43:12.518828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:43:12.650093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:43:12.661993 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:43:12.724492 kubelet[2728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:43:12.724492 kubelet[2728]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:43:12.724492 kubelet[2728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:43:12.724492 kubelet[2728]: I1008 19:43:12.723921 2728 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:43:12.731524 kubelet[2728]: I1008 19:43:12.731474 2728 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Oct 8 19:43:12.731524 kubelet[2728]: I1008 19:43:12.731512 2728 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:43:12.731758 kubelet[2728]: I1008 19:43:12.731741 2728 server.go:927] "Client rotation is on, will bootstrap in background" Oct 8 19:43:12.734580 kubelet[2728]: I1008 19:43:12.733509 2728 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Oct 8 19:43:12.735112 kubelet[2728]: I1008 19:43:12.735078 2728 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:43:12.741842 kubelet[2728]: I1008 19:43:12.741809 2728 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:43:12.742036 kubelet[2728]: I1008 19:43:12.742002 2728 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:43:12.742211 kubelet[2728]: I1008 19:43:12.742035 2728 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975-2-2-5-28a2d443fc","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Oct 8 19:43:12.742301 kubelet[2728]: I1008 19:43:12.742218 2728 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:43:12.742301 kubelet[2728]: I1008 19:43:12.742227 2728 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 19:43:12.742301 kubelet[2728]: I1008 19:43:12.742265 2728 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:43:12.742421 kubelet[2728]: I1008 19:43:12.742396 2728 kubelet.go:400] "Attempting to sync node with API server" Oct 8 19:43:12.742421 kubelet[2728]: I1008 19:43:12.742411 2728 kubelet.go:301] "Adding static pod path" 
path="/etc/kubernetes/manifests" Oct 8 19:43:12.742874 kubelet[2728]: I1008 19:43:12.742851 2728 kubelet.go:312] "Adding apiserver pod source" Oct 8 19:43:12.745155 kubelet[2728]: I1008 19:43:12.745110 2728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:43:12.755574 kubelet[2728]: I1008 19:43:12.755541 2728 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:43:12.758198 kubelet[2728]: I1008 19:43:12.755948 2728 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:43:12.758198 kubelet[2728]: I1008 19:43:12.756543 2728 server.go:1264] "Started kubelet" Oct 8 19:43:12.760217 kubelet[2728]: I1008 19:43:12.760185 2728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:43:12.761502 kubelet[2728]: I1008 19:43:12.761428 2728 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:43:12.765402 kubelet[2728]: I1008 19:43:12.764290 2728 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 19:43:12.765402 kubelet[2728]: I1008 19:43:12.764669 2728 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 8 19:43:12.765402 kubelet[2728]: I1008 19:43:12.764844 2728 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:43:12.765576 kubelet[2728]: I1008 19:43:12.765451 2728 server.go:455] "Adding debug handlers to kubelet server" Oct 8 19:43:12.767582 kubelet[2728]: I1008 19:43:12.766408 2728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:43:12.767582 kubelet[2728]: I1008 19:43:12.766660 2728 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:43:12.768314 kubelet[2728]: I1008 19:43:12.768256 2728 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:43:12.768435 kubelet[2728]: I1008 19:43:12.768409 2728 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:43:12.773382 kubelet[2728]: I1008 19:43:12.770661 2728 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:43:12.787755 kubelet[2728]: I1008 19:43:12.787709 2728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:43:12.796119 kubelet[2728]: I1008 19:43:12.795476 2728 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:43:12.796119 kubelet[2728]: I1008 19:43:12.795607 2728 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:43:12.796119 kubelet[2728]: I1008 19:43:12.795630 2728 kubelet.go:2337] "Starting kubelet main sync loop" Oct 8 19:43:12.796119 kubelet[2728]: E1008 19:43:12.795780 2728 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:43:12.865601 kubelet[2728]: I1008 19:43:12.865518 2728 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:43:12.865601 kubelet[2728]: I1008 19:43:12.865550 2728 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:43:12.865848 kubelet[2728]: I1008 19:43:12.865817 2728 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:43:12.866460 kubelet[2728]: I1008 19:43:12.866016 2728 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:43:12.866460 kubelet[2728]: I1008 19:43:12.866035 2728 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:43:12.866460 kubelet[2728]: I1008 19:43:12.866056 2728 policy_none.go:49] "None policy: Start" Oct 8 19:43:12.868613 kubelet[2728]: I1008 19:43:12.868564 2728 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:43:12.868613 kubelet[2728]: I1008 19:43:12.868597 2728 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:43:12.868775 kubelet[2728]: I1008 19:43:12.868765 2728 state_mem.go:75] "Updated machine memory state" Oct 8 19:43:12.873042 kubelet[2728]: I1008 19:43:12.873005 2728 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:12.880718 kubelet[2728]: I1008 19:43:12.880111 2728 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:43:12.880718 kubelet[2728]: I1008 19:43:12.880341 2728 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:43:12.880718 kubelet[2728]: I1008 19:43:12.880465 2728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:43:12.892914 kubelet[2728]: I1008 19:43:12.892872 2728 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:12.893050 kubelet[2728]: I1008 19:43:12.892963 2728 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:12.898496 kubelet[2728]: I1008 19:43:12.896004 2728 topology_manager.go:215] "Topology Admit Handler" podUID="f4e917432488e7e0a7b639fba3454fed" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:12.898496 kubelet[2728]: I1008 19:43:12.897910 2728 topology_manager.go:215] "Topology Admit Handler" podUID="244bde55cb325834d8317ac5dcbaa5dd" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:12.898496 kubelet[2728]: I1008 19:43:12.897957 2728 topology_manager.go:215] "Topology Admit Handler" podUID="23cef816f5ee6340f72279a9dad25836" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:12.921769 kubelet[2728]: E1008 19:43:12.921694 2728 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" already exists" pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.066528 kubelet[2728]: I1008 19:43:13.066378 2728 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.066528 kubelet[2728]: I1008 19:43:13.066435 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.066528 kubelet[2728]: I1008 19:43:13.066457 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.066528 kubelet[2728]: I1008 19:43:13.066485 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f4e917432488e7e0a7b639fba3454fed-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-5-28a2d443fc\" (UID: \"f4e917432488e7e0a7b639fba3454fed\") " pod="kube-system/kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.066528 kubelet[2728]: I1008 19:43:13.066506 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f4e917432488e7e0a7b639fba3454fed-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-5-28a2d443fc\" (UID: \"f4e917432488e7e0a7b639fba3454fed\") " pod="kube-system/kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.067044 kubelet[2728]: I1008 19:43:13.067019 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f4e917432488e7e0a7b639fba3454fed-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-5-28a2d443fc\" (UID: \"f4e917432488e7e0a7b639fba3454fed\") " pod="kube-system/kube-apiserver-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.067243 kubelet[2728]: I1008 19:43:13.067051 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.067243 kubelet[2728]: I1008 19:43:13.067071 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/244bde55cb325834d8317ac5dcbaa5dd-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-5-28a2d443fc\" (UID: \"244bde55cb325834d8317ac5dcbaa5dd\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.067243 kubelet[2728]: I1008 19:43:13.067090 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/23cef816f5ee6340f72279a9dad25836-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-5-28a2d443fc\" (UID: \"23cef816f5ee6340f72279a9dad25836\") " pod="kube-system/kube-scheduler-ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:13.746754 kubelet[2728]: I1008 19:43:13.746693 2728 apiserver.go:52] "Watching apiserver" Oct 8 19:43:13.764998 kubelet[2728]: I1008 19:43:13.764926 2728 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Oct 8 19:43:13.888546 kubelet[2728]: I1008 19:43:13.887078 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-2-2-5-28a2d443fc" podStartSLOduration=2.88705998 podStartE2EDuration="2.88705998s" podCreationTimestamp="2024-10-08 19:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:13.881678752 +0000 UTC m=+1.215151296" watchObservedRunningTime="2024-10-08 19:43:13.88705998 +0000 UTC m=+1.220532484" Oct 8 19:43:14.019472 kubelet[2728]: I1008 19:43:14.019268 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-2-2-5-28a2d443fc" podStartSLOduration=2.019245874 podStartE2EDuration="2.019245874s" podCreationTimestamp="2024-10-08 19:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:13.994102636 +0000 UTC m=+1.327575180" watchObservedRunningTime="2024-10-08 19:43:14.019245874 +0000 UTC m=+1.352718418" Oct 8 19:43:14.054041 kubelet[2728]: I1008 19:43:14.053978 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-2-2-5-28a2d443fc" podStartSLOduration=2.053951482 podStartE2EDuration="2.053951482s" podCreationTimestamp="2024-10-08 19:43:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:14.020440186 +0000 UTC m=+1.353912730" watchObservedRunningTime="2024-10-08 19:43:14.053951482 +0000 UTC m=+1.387424026" Oct 8 19:43:18.190964 sudo[1852]: pam_unix(sudo:session): session closed for user root Oct 8 19:43:18.353210 sshd[1849]: pam_unix(sshd:session): session closed for user core Oct 8 19:43:18.357806 systemd[1]: sshd@6-49.13.142.189:22-139.178.89.65:40278.service: Deactivated successfully. Oct 8 19:43:18.360242 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:43:18.361501 systemd[1]: session-7.scope: Consumed 7.217s CPU time, 137.1M memory peak, 0B memory swap peak. Oct 8 19:43:18.364647 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:43:18.365909 systemd-logind[1449]: Removed session 7. Oct 8 19:43:27.345857 kubelet[2728]: I1008 19:43:27.345808 2728 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:43:27.348165 containerd[1464]: time="2024-10-08T19:43:27.347604048Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 8 19:43:27.349119 kubelet[2728]: I1008 19:43:27.347835 2728 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:43:27.526399 kubelet[2728]: I1008 19:43:27.525977 2728 topology_manager.go:215] "Topology Admit Handler" podUID="beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57" podNamespace="kube-system" podName="kube-proxy-2kvfn" Oct 8 19:43:27.536717 systemd[1]: Created slice kubepods-besteffort-podbeb1d5c2_7ea6_4c1b_8e22_a224e0bfbe57.slice - libcontainer container kubepods-besteffort-podbeb1d5c2_7ea6_4c1b_8e22_a224e0bfbe57.slice. Oct 8 19:43:27.548564 kubelet[2728]: W1008 19:43:27.548482 2728 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3975-2-2-5-28a2d443fc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-2-2-5-28a2d443fc' and this object Oct 8 19:43:27.548564 kubelet[2728]: E1008 19:43:27.548533 2728 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3975-2-2-5-28a2d443fc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-2-2-5-28a2d443fc' and this object Oct 8 19:43:27.549061 kubelet[2728]: W1008 19:43:27.549022 2728 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-2-2-5-28a2d443fc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-2-2-5-28a2d443fc' and this object Oct 8 19:43:27.549061 kubelet[2728]: E1008 19:43:27.549045 2728 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975-2-2-5-28a2d443fc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3975-2-2-5-28a2d443fc' and this object Oct 8 19:43:27.550894 kubelet[2728]: I1008 19:43:27.549803 2728 topology_manager.go:215] "Topology Admit Handler" podUID="8141870f-3b0b-4020-a725-0652e234a8d7" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-6snkb" Oct 8 19:43:27.562465 systemd[1]: Created slice kubepods-besteffort-pod8141870f_3b0b_4020_a725_0652e234a8d7.slice - libcontainer container kubepods-besteffort-pod8141870f_3b0b_4020_a725_0652e234a8d7.slice. 
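A note on the pair of reflector warnings above: the kubelet authenticates as system:node:ci-3975-2-2-5-28a2d443fc, and the Node authorizer only lets a node read ConfigMaps referenced by pods already bound to it, so the "no relationship found between node ... and this object" denial is expected in the brief window between the kube-proxy pod being admitted and the authorizer observing its binding; the watch succeeds on retry. To reproduce the same kind of API response by hand with client-go, a rough sketch (the kubeconfig path is an assumption for illustration, not taken from this log):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the node's kubelet credentials (path assumed).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Until some pod bound to this node references the ConfigMap, the Node
	// authorizer denies the request with the same kind of
	// "forbidden ... no relationship found" message logged above.
	_, err = cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
	fmt.Println(err)
}
```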
Oct 8 19:43:27.567208 kubelet[2728]: I1008 19:43:27.566966 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57-lib-modules\") pod \"kube-proxy-2kvfn\" (UID: \"beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57\") " pod="kube-system/kube-proxy-2kvfn" Oct 8 19:43:27.567208 kubelet[2728]: I1008 19:43:27.567007 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57-kube-proxy\") pod \"kube-proxy-2kvfn\" (UID: \"beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57\") " pod="kube-system/kube-proxy-2kvfn" Oct 8 19:43:27.567208 kubelet[2728]: I1008 19:43:27.567134 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzjz9\" (UniqueName: \"kubernetes.io/projected/beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57-kube-api-access-wzjz9\") pod \"kube-proxy-2kvfn\" (UID: \"beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57\") " pod="kube-system/kube-proxy-2kvfn" Oct 8 19:43:27.567208 kubelet[2728]: I1008 19:43:27.567160 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8141870f-3b0b-4020-a725-0652e234a8d7-var-lib-calico\") pod \"tigera-operator-77f994b5bb-6snkb\" (UID: \"8141870f-3b0b-4020-a725-0652e234a8d7\") " pod="tigera-operator/tigera-operator-77f994b5bb-6snkb" Oct 8 19:43:27.567208 kubelet[2728]: I1008 19:43:27.567177 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25b8\" (UniqueName: \"kubernetes.io/projected/8141870f-3b0b-4020-a725-0652e234a8d7-kube-api-access-g25b8\") pod \"tigera-operator-77f994b5bb-6snkb\" (UID: \"8141870f-3b0b-4020-a725-0652e234a8d7\") " pod="tigera-operator/tigera-operator-77f994b5bb-6snkb" Oct 8 19:43:27.568593 kubelet[2728]: I1008 19:43:27.568118 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57-xtables-lock\") pod \"kube-proxy-2kvfn\" (UID: \"beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57\") " pod="kube-system/kube-proxy-2kvfn" Oct 8 19:43:27.871053 containerd[1464]: time="2024-10-08T19:43:27.870536960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-6snkb,Uid:8141870f-3b0b-4020-a725-0652e234a8d7,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:43:27.903109 containerd[1464]: time="2024-10-08T19:43:27.902559027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:27.903109 containerd[1464]: time="2024-10-08T19:43:27.902632948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:27.903109 containerd[1464]: time="2024-10-08T19:43:27.902648069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:27.903109 containerd[1464]: time="2024-10-08T19:43:27.902657469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:27.936777 systemd[1]: Started cri-containerd-70a05bfad57f423a16188a94bc6d5fc53d5444ff6ea9e22a341b71336669d016.scope - libcontainer container 70a05bfad57f423a16188a94bc6d5fc53d5444ff6ea9e22a341b71336669d016. Oct 8 19:43:27.984290 containerd[1464]: time="2024-10-08T19:43:27.984248985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-6snkb,Uid:8141870f-3b0b-4020-a725-0652e234a8d7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"70a05bfad57f423a16188a94bc6d5fc53d5444ff6ea9e22a341b71336669d016\"" Oct 8 19:43:27.986573 containerd[1464]: time="2024-10-08T19:43:27.986421228Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:43:28.746655 containerd[1464]: time="2024-10-08T19:43:28.746498934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kvfn,Uid:beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:28.782935 containerd[1464]: time="2024-10-08T19:43:28.782779508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:28.782935 containerd[1464]: time="2024-10-08T19:43:28.782856709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:28.782935 containerd[1464]: time="2024-10-08T19:43:28.782883030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:28.782935 containerd[1464]: time="2024-10-08T19:43:28.782897390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:28.808633 systemd[1]: Started cri-containerd-70676879bf2309de4c781cc68165ef42bdd35389e0f723598a385e36d6706922.scope - libcontainer container 70676879bf2309de4c781cc68165ef42bdd35389e0f723598a385e36d6706922. Oct 8 19:43:28.837794 containerd[1464]: time="2024-10-08T19:43:28.837651597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kvfn,Uid:beb1d5c2-7ea6-4c1b-8e22-a224e0bfbe57,Namespace:kube-system,Attempt:0,} returns sandbox id \"70676879bf2309de4c781cc68165ef42bdd35389e0f723598a385e36d6706922\"" Oct 8 19:43:28.848373 containerd[1464]: time="2024-10-08T19:43:28.848171039Z" level=info msg="CreateContainer within sandbox \"70676879bf2309de4c781cc68165ef42bdd35389e0f723598a385e36d6706922\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:43:28.876736 containerd[1464]: time="2024-10-08T19:43:28.876669464Z" level=info msg="CreateContainer within sandbox \"70676879bf2309de4c781cc68165ef42bdd35389e0f723598a385e36d6706922\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65904bf83d1d5222621fe9360ec2d8d0971fef0a8607318c3885fdb0c853e7fb\"" Oct 8 19:43:28.878319 containerd[1464]: time="2024-10-08T19:43:28.878263614Z" level=info msg="StartContainer for \"65904bf83d1d5222621fe9360ec2d8d0971fef0a8607318c3885fdb0c853e7fb\"" Oct 8 19:43:28.910670 systemd[1]: Started cri-containerd-65904bf83d1d5222621fe9360ec2d8d0971fef0a8607318c3885fdb0c853e7fb.scope - libcontainer container 65904bf83d1d5222621fe9360ec2d8d0971fef0a8607318c3885fdb0c853e7fb. 
Oct 8 19:43:28.950479 containerd[1464]: time="2024-10-08T19:43:28.949785302Z" level=info msg="StartContainer for \"65904bf83d1d5222621fe9360ec2d8d0971fef0a8607318c3885fdb0c853e7fb\" returns successfully" Oct 8 19:43:29.893916 kubelet[2728]: I1008 19:43:29.893620 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2kvfn" podStartSLOduration=2.893600057 podStartE2EDuration="2.893600057s" podCreationTimestamp="2024-10-08 19:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:29.892503876 +0000 UTC m=+17.225976420" watchObservedRunningTime="2024-10-08 19:43:29.893600057 +0000 UTC m=+17.227072601" Oct 8 19:43:30.084917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1960788359.mount: Deactivated successfully. Oct 8 19:43:30.385110 containerd[1464]: time="2024-10-08T19:43:30.385035329Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:30.386897 containerd[1464]: time="2024-10-08T19:43:30.386841763Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485907" Oct 8 19:43:30.387833 containerd[1464]: time="2024-10-08T19:43:30.387800100Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:30.390712 containerd[1464]: time="2024-10-08T19:43:30.390660112Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:30.392224 containerd[1464]: time="2024-10-08T19:43:30.392176180Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.405713271s" Oct 8 19:43:30.392224 containerd[1464]: time="2024-10-08T19:43:30.392217781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 8 19:43:30.398561 containerd[1464]: time="2024-10-08T19:43:30.397245793Z" level=info msg="CreateContainer within sandbox \"70a05bfad57f423a16188a94bc6d5fc53d5444ff6ea9e22a341b71336669d016\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:43:30.411317 containerd[1464]: time="2024-10-08T19:43:30.411189608Z" level=info msg="CreateContainer within sandbox \"70a05bfad57f423a16188a94bc6d5fc53d5444ff6ea9e22a341b71336669d016\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ff3d7251019349ff58eda365db98456be7dc89adcd75b0f5c06e9f101e3f89c7\"" Oct 8 19:43:30.413987 containerd[1464]: time="2024-10-08T19:43:30.411844740Z" level=info msg="StartContainer for \"ff3d7251019349ff58eda365db98456be7dc89adcd75b0f5c06e9f101e3f89c7\"" Oct 8 19:43:30.438563 systemd[1]: Started cri-containerd-ff3d7251019349ff58eda365db98456be7dc89adcd75b0f5c06e9f101e3f89c7.scope - libcontainer container ff3d7251019349ff58eda365db98456be7dc89adcd75b0f5c06e9f101e3f89c7. 
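For scale, the operator image pull above reports 19,485,907 bytes read in 2.405713271s, roughly 8.1 MB/s from quay.io; a trivial Go check of that arithmetic (illustrative only):

```go
package main

import "fmt"

func main() {
	// Figures copied from the containerd records above.
	const bytesRead = 19485907.0    // "bytes read=19485907"
	const pullSeconds = 2.405713271 // "in 2.405713271s"

	fmt.Printf("%.1f MB/s\n", bytesRead/pullSeconds/1e6) // about 8.1 MB/s
}
```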
Oct 8 19:43:30.468941 containerd[1464]: time="2024-10-08T19:43:30.467315514Z" level=info msg="StartContainer for \"ff3d7251019349ff58eda365db98456be7dc89adcd75b0f5c06e9f101e3f89c7\" returns successfully" Oct 8 19:43:35.130431 kubelet[2728]: I1008 19:43:35.130315 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-6snkb" podStartSLOduration=5.723292994 podStartE2EDuration="8.13029697s" podCreationTimestamp="2024-10-08 19:43:27 +0000 UTC" firstStartedPulling="2024-10-08 19:43:27.98602338 +0000 UTC m=+15.319495884" lastFinishedPulling="2024-10-08 19:43:30.393027316 +0000 UTC m=+17.726499860" observedRunningTime="2024-10-08 19:43:30.900678721 +0000 UTC m=+18.234151265" watchObservedRunningTime="2024-10-08 19:43:35.13029697 +0000 UTC m=+22.463769474" Oct 8 19:43:35.130827 kubelet[2728]: I1008 19:43:35.130600 2728 topology_manager.go:215] "Topology Admit Handler" podUID="347d36fe-b2d6-4c0b-8426-0cca1f0b70ed" podNamespace="calico-system" podName="calico-typha-7cdfd8657c-kbw8v" Oct 8 19:43:35.141177 systemd[1]: Created slice kubepods-besteffort-pod347d36fe_b2d6_4c0b_8426_0cca1f0b70ed.slice - libcontainer container kubepods-besteffort-pod347d36fe_b2d6_4c0b_8426_0cca1f0b70ed.slice. Oct 8 19:43:35.220728 kubelet[2728]: I1008 19:43:35.220679 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/347d36fe-b2d6-4c0b-8426-0cca1f0b70ed-tigera-ca-bundle\") pod \"calico-typha-7cdfd8657c-kbw8v\" (UID: \"347d36fe-b2d6-4c0b-8426-0cca1f0b70ed\") " pod="calico-system/calico-typha-7cdfd8657c-kbw8v" Oct 8 19:43:35.220983 kubelet[2728]: I1008 19:43:35.220962 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bczh5\" (UniqueName: \"kubernetes.io/projected/347d36fe-b2d6-4c0b-8426-0cca1f0b70ed-kube-api-access-bczh5\") pod \"calico-typha-7cdfd8657c-kbw8v\" (UID: \"347d36fe-b2d6-4c0b-8426-0cca1f0b70ed\") " pod="calico-system/calico-typha-7cdfd8657c-kbw8v" Oct 8 19:43:35.221085 kubelet[2728]: I1008 19:43:35.221065 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/347d36fe-b2d6-4c0b-8426-0cca1f0b70ed-typha-certs\") pod \"calico-typha-7cdfd8657c-kbw8v\" (UID: \"347d36fe-b2d6-4c0b-8426-0cca1f0b70ed\") " pod="calico-system/calico-typha-7cdfd8657c-kbw8v" Oct 8 19:43:35.251263 kubelet[2728]: I1008 19:43:35.251211 2728 topology_manager.go:215] "Topology Admit Handler" podUID="8a9fdce0-13f6-4787-92e6-a78e36379ff2" podNamespace="calico-system" podName="calico-node-ts68d" Oct 8 19:43:35.262202 systemd[1]: Created slice kubepods-besteffort-pod8a9fdce0_13f6_4787_92e6_a78e36379ff2.slice - libcontainer container kubepods-besteffort-pod8a9fdce0_13f6_4787_92e6_a78e36379ff2.slice. 
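A note on the tigera-operator startup record above: unlike the static control-plane pods, this one had to pull its image first, and podStartSLOduration appears to be the end-to-end duration minus the image-pull window (firstStartedPulling to lastFinishedPulling), which is why it is about 2.4s shorter than podStartE2EDuration. Checking that against the logged timestamps in Go (illustrative only; small rounding in the printed values is expected):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the tigera-operator record above.
	created := parse("2024-10-08 19:43:27 +0000 UTC")
	firstPull := parse("2024-10-08 19:43:27.98602338 +0000 UTC")
	lastPull := parse("2024-10-08 19:43:30.393027316 +0000 UTC")
	observed := parse("2024-10-08 19:43:35.13029697 +0000 UTC")

	e2e := observed.Sub(created)         // 8.13029697s, the logged podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // about 5.723293s, the logged podStartSLOduration

	fmt.Println(e2e, slo)
}
```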
Oct 8 19:43:35.322024 kubelet[2728]: I1008 19:43:35.321973 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a9fdce0-13f6-4787-92e6-a78e36379ff2-tigera-ca-bundle\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322024 kubelet[2728]: I1008 19:43:35.322019 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-cni-bin-dir\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322199 kubelet[2728]: I1008 19:43:35.322038 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-lib-modules\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322199 kubelet[2728]: I1008 19:43:35.322055 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-cni-log-dir\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322199 kubelet[2728]: I1008 19:43:35.322074 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4rdl\" (UniqueName: \"kubernetes.io/projected/8a9fdce0-13f6-4787-92e6-a78e36379ff2-kube-api-access-j4rdl\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322199 kubelet[2728]: I1008 19:43:35.322092 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-flexvol-driver-host\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322199 kubelet[2728]: I1008 19:43:35.322122 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8a9fdce0-13f6-4787-92e6-a78e36379ff2-node-certs\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322311 kubelet[2728]: I1008 19:43:35.322138 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-var-lib-calico\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322311 kubelet[2728]: I1008 19:43:35.322163 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-xtables-lock\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322311 kubelet[2728]: I1008 19:43:35.322185 2728 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-policysync\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322311 kubelet[2728]: I1008 19:43:35.322199 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-var-run-calico\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.322311 kubelet[2728]: I1008 19:43:35.322216 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8a9fdce0-13f6-4787-92e6-a78e36379ff2-cni-net-dir\") pod \"calico-node-ts68d\" (UID: \"8a9fdce0-13f6-4787-92e6-a78e36379ff2\") " pod="calico-system/calico-node-ts68d" Oct 8 19:43:35.427053 kubelet[2728]: E1008 19:43:35.426349 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.427053 kubelet[2728]: W1008 19:43:35.426404 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.427053 kubelet[2728]: E1008 19:43:35.426425 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.428400 kubelet[2728]: E1008 19:43:35.427959 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.428747 kubelet[2728]: W1008 19:43:35.428601 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.428747 kubelet[2728]: E1008 19:43:35.428632 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.429742 kubelet[2728]: E1008 19:43:35.429633 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.429742 kubelet[2728]: W1008 19:43:35.429649 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.430510 kubelet[2728]: E1008 19:43:35.430417 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.430510 kubelet[2728]: W1008 19:43:35.430433 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.430601 kubelet[2728]: E1008 19:43:35.429861 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.433512 kubelet[2728]: E1008 19:43:35.430766 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.433728 kubelet[2728]: E1008 19:43:35.432437 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.433728 kubelet[2728]: W1008 19:43:35.433611 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.433855 kubelet[2728]: E1008 19:43:35.433840 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.438397 kubelet[2728]: E1008 19:43:35.436515 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.438397 kubelet[2728]: W1008 19:43:35.436674 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.438397 kubelet[2728]: E1008 19:43:35.436748 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.438397 kubelet[2728]: E1008 19:43:35.437053 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.438397 kubelet[2728]: W1008 19:43:35.437065 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.438397 kubelet[2728]: E1008 19:43:35.437670 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.438397 kubelet[2728]: E1008 19:43:35.437799 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.438397 kubelet[2728]: W1008 19:43:35.437812 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.439994 kubelet[2728]: E1008 19:43:35.439163 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.439994 kubelet[2728]: E1008 19:43:35.439274 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.439994 kubelet[2728]: W1008 19:43:35.439299 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.439994 kubelet[2728]: I1008 19:43:35.439403 2728 topology_manager.go:215] "Topology Admit Handler" podUID="ad896521-c957-4f2e-a42a-e7387295bb9d" podNamespace="calico-system" podName="csi-node-driver-rsxhd" Oct 8 19:43:35.439994 kubelet[2728]: E1008 19:43:35.439664 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsxhd" podUID="ad896521-c957-4f2e-a42a-e7387295bb9d" Oct 8 19:43:35.439994 kubelet[2728]: E1008 19:43:35.439964 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.439994 kubelet[2728]: E1008 19:43:35.439969 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.440326 kubelet[2728]: W1008 19:43:35.439990 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.440326 kubelet[2728]: E1008 19:43:35.440101 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.440713 kubelet[2728]: E1008 19:43:35.440678 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.440713 kubelet[2728]: W1008 19:43:35.440708 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.441483 kubelet[2728]: E1008 19:43:35.441441 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.441647 kubelet[2728]: E1008 19:43:35.441628 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.441693 kubelet[2728]: W1008 19:43:35.441647 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.441693 kubelet[2728]: E1008 19:43:35.441668 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.442152 kubelet[2728]: E1008 19:43:35.442117 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.442152 kubelet[2728]: W1008 19:43:35.442137 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.442314 kubelet[2728]: E1008 19:43:35.442264 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.443231 kubelet[2728]: E1008 19:43:35.443090 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.443231 kubelet[2728]: W1008 19:43:35.443111 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.444272 kubelet[2728]: E1008 19:43:35.444005 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.444390 kubelet[2728]: E1008 19:43:35.444338 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.444425 kubelet[2728]: W1008 19:43:35.444353 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.444425 kubelet[2728]: E1008 19:43:35.444406 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.445407 kubelet[2728]: E1008 19:43:35.445349 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.445407 kubelet[2728]: W1008 19:43:35.445401 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.445528 kubelet[2728]: E1008 19:43:35.445433 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.445722 kubelet[2728]: E1008 19:43:35.445706 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.445722 kubelet[2728]: W1008 19:43:35.445720 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.445783 kubelet[2728]: E1008 19:43:35.445731 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.450501 containerd[1464]: time="2024-10-08T19:43:35.450451425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cdfd8657c-kbw8v,Uid:347d36fe-b2d6-4c0b-8426-0cca1f0b70ed,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:35.464419 kubelet[2728]: E1008 19:43:35.462495 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.464419 kubelet[2728]: W1008 19:43:35.462527 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.464419 kubelet[2728]: E1008 19:43:35.462553 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.494132 containerd[1464]: time="2024-10-08T19:43:35.492405954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:35.494132 containerd[1464]: time="2024-10-08T19:43:35.492485675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:35.494132 containerd[1464]: time="2024-10-08T19:43:35.492548156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:35.494132 containerd[1464]: time="2024-10-08T19:43:35.492562836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:35.512589 kubelet[2728]: E1008 19:43:35.512439 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.512589 kubelet[2728]: W1008 19:43:35.512468 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.512589 kubelet[2728]: E1008 19:43:35.512551 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.514390 kubelet[2728]: E1008 19:43:35.513572 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.514390 kubelet[2728]: W1008 19:43:35.513592 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.514390 kubelet[2728]: E1008 19:43:35.513610 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.514390 kubelet[2728]: E1008 19:43:35.514302 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.514390 kubelet[2728]: W1008 19:43:35.514317 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.514390 kubelet[2728]: E1008 19:43:35.514331 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.518407 kubelet[2728]: E1008 19:43:35.516062 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.518407 kubelet[2728]: W1008 19:43:35.516082 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.518407 kubelet[2728]: E1008 19:43:35.516098 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.518407 kubelet[2728]: E1008 19:43:35.516918 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.518407 kubelet[2728]: W1008 19:43:35.516931 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.518407 kubelet[2728]: E1008 19:43:35.516945 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.518407 kubelet[2728]: E1008 19:43:35.517147 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.518407 kubelet[2728]: W1008 19:43:35.517155 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.518407 kubelet[2728]: E1008 19:43:35.517165 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.519639 systemd[1]: Started cri-containerd-9febe8b68939464a641d5c403e68969a5cb0aa62f4ca80360da9ade711a9bb3a.scope - libcontainer container 9febe8b68939464a641d5c403e68969a5cb0aa62f4ca80360da9ade711a9bb3a. 
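A note on the wall of FlexVolume warnings around the calico-node admission: the kubelet is probing the flexvolume plugin directory and the nodeagent~uds driver's uds executable is not present yet (calico-node mounts that directory as flexvol-driver-host in the volume list above and normally installs the driver), so each probe yields empty output. The companion "Failed to unmarshal output ... unexpected end of JSON input" line is simply JSON decoding of that empty output; the noise typically stops once the driver is in place. A one-line reproduction of the error text in Go (illustrative only, not kubelet code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A FlexVolume driver is expected to print a JSON status object on stdout.
	// A missing driver binary yields no output at all, and decoding the empty
	// string reproduces the error text seen in the log.
	var status map[string]interface{}
	fmt.Println(json.Unmarshal([]byte(""), &status)) // unexpected end of JSON input
}
```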
Oct 8 19:43:35.520303 kubelet[2728]: E1008 19:43:35.520259 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.520303 kubelet[2728]: W1008 19:43:35.520285 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.520407 kubelet[2728]: E1008 19:43:35.520309 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.523340 kubelet[2728]: E1008 19:43:35.520597 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.523340 kubelet[2728]: W1008 19:43:35.520611 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.523340 kubelet[2728]: E1008 19:43:35.520621 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.523340 kubelet[2728]: E1008 19:43:35.522612 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.523340 kubelet[2728]: W1008 19:43:35.522640 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.523340 kubelet[2728]: E1008 19:43:35.522662 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.523340 kubelet[2728]: E1008 19:43:35.522998 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.523340 kubelet[2728]: W1008 19:43:35.523008 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.523340 kubelet[2728]: E1008 19:43:35.523018 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.526666 kubelet[2728]: E1008 19:43:35.526632 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.526666 kubelet[2728]: W1008 19:43:35.526659 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.526802 kubelet[2728]: E1008 19:43:35.526682 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.527425 kubelet[2728]: E1008 19:43:35.527227 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.527425 kubelet[2728]: W1008 19:43:35.527244 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.527425 kubelet[2728]: E1008 19:43:35.527260 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.528310 kubelet[2728]: E1008 19:43:35.528287 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.528761 kubelet[2728]: W1008 19:43:35.528303 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.528761 kubelet[2728]: E1008 19:43:35.528403 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.529027 kubelet[2728]: E1008 19:43:35.529005 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.529027 kubelet[2728]: W1008 19:43:35.529024 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.529106 kubelet[2728]: E1008 19:43:35.529038 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.529755 kubelet[2728]: E1008 19:43:35.529729 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.529755 kubelet[2728]: W1008 19:43:35.529748 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.529991 kubelet[2728]: E1008 19:43:35.529764 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.530267 kubelet[2728]: E1008 19:43:35.530244 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.530267 kubelet[2728]: W1008 19:43:35.530260 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.530470 kubelet[2728]: E1008 19:43:35.530447 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.531210 kubelet[2728]: E1008 19:43:35.531191 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.531210 kubelet[2728]: W1008 19:43:35.531206 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.531304 kubelet[2728]: E1008 19:43:35.531219 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.531549 kubelet[2728]: E1008 19:43:35.531532 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.531659 kubelet[2728]: W1008 19:43:35.531638 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.531771 kubelet[2728]: E1008 19:43:35.531659 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.532434 kubelet[2728]: E1008 19:43:35.532412 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.532434 kubelet[2728]: W1008 19:43:35.532427 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.532434 kubelet[2728]: E1008 19:43:35.532439 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.532681 kubelet[2728]: E1008 19:43:35.532666 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.532681 kubelet[2728]: W1008 19:43:35.532677 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.532776 kubelet[2728]: E1008 19:43:35.532686 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.533193 kubelet[2728]: E1008 19:43:35.533172 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.533193 kubelet[2728]: W1008 19:43:35.533187 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.533193 kubelet[2728]: E1008 19:43:35.533198 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.533495 kubelet[2728]: I1008 19:43:35.533227 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad896521-c957-4f2e-a42a-e7387295bb9d-varrun\") pod \"csi-node-driver-rsxhd\" (UID: \"ad896521-c957-4f2e-a42a-e7387295bb9d\") " pod="calico-system/csi-node-driver-rsxhd" Oct 8 19:43:35.533753 kubelet[2728]: E1008 19:43:35.533734 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.533753 kubelet[2728]: W1008 19:43:35.533751 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.533953 kubelet[2728]: E1008 19:43:35.533768 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.533953 kubelet[2728]: I1008 19:43:35.533791 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7298z\" (UniqueName: \"kubernetes.io/projected/ad896521-c957-4f2e-a42a-e7387295bb9d-kube-api-access-7298z\") pod \"csi-node-driver-rsxhd\" (UID: \"ad896521-c957-4f2e-a42a-e7387295bb9d\") " pod="calico-system/csi-node-driver-rsxhd" Oct 8 19:43:35.534380 kubelet[2728]: E1008 19:43:35.534346 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.534924 kubelet[2728]: W1008 19:43:35.534840 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.534924 kubelet[2728]: E1008 19:43:35.534872 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.535260 kubelet[2728]: E1008 19:43:35.535213 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.535260 kubelet[2728]: W1008 19:43:35.535224 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.536527 kubelet[2728]: E1008 19:43:35.536411 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.536887 kubelet[2728]: E1008 19:43:35.536770 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.536887 kubelet[2728]: W1008 19:43:35.536787 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.537079 kubelet[2728]: E1008 19:43:35.536999 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.537079 kubelet[2728]: I1008 19:43:35.537036 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad896521-c957-4f2e-a42a-e7387295bb9d-socket-dir\") pod \"csi-node-driver-rsxhd\" (UID: \"ad896521-c957-4f2e-a42a-e7387295bb9d\") " pod="calico-system/csi-node-driver-rsxhd" Oct 8 19:43:35.537194 kubelet[2728]: E1008 19:43:35.537184 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.537278 kubelet[2728]: W1008 19:43:35.537236 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.537278 kubelet[2728]: E1008 19:43:35.537260 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.537608 kubelet[2728]: E1008 19:43:35.537566 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.537608 kubelet[2728]: W1008 19:43:35.537587 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.537608 kubelet[2728]: E1008 19:43:35.537607 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.537717 kubelet[2728]: I1008 19:43:35.537629 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad896521-c957-4f2e-a42a-e7387295bb9d-kubelet-dir\") pod \"csi-node-driver-rsxhd\" (UID: \"ad896521-c957-4f2e-a42a-e7387295bb9d\") " pod="calico-system/csi-node-driver-rsxhd" Oct 8 19:43:35.537977 kubelet[2728]: E1008 19:43:35.537810 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.537977 kubelet[2728]: W1008 19:43:35.537828 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.537977 kubelet[2728]: E1008 19:43:35.537840 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.537977 kubelet[2728]: E1008 19:43:35.537972 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.537977 kubelet[2728]: W1008 19:43:35.537979 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.538539 kubelet[2728]: E1008 19:43:35.537987 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.538539 kubelet[2728]: E1008 19:43:35.538134 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.538539 kubelet[2728]: W1008 19:43:35.538142 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.538539 kubelet[2728]: E1008 19:43:35.538155 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.538539 kubelet[2728]: I1008 19:43:35.538173 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad896521-c957-4f2e-a42a-e7387295bb9d-registration-dir\") pod \"csi-node-driver-rsxhd\" (UID: \"ad896521-c957-4f2e-a42a-e7387295bb9d\") " pod="calico-system/csi-node-driver-rsxhd" Oct 8 19:43:35.538539 kubelet[2728]: E1008 19:43:35.538313 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.538539 kubelet[2728]: W1008 19:43:35.538322 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.538539 kubelet[2728]: E1008 19:43:35.538331 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.539018 kubelet[2728]: E1008 19:43:35.538598 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.539018 kubelet[2728]: W1008 19:43:35.538609 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.539018 kubelet[2728]: E1008 19:43:35.538620 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.539018 kubelet[2728]: E1008 19:43:35.538951 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.539018 kubelet[2728]: W1008 19:43:35.538962 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.539512 kubelet[2728]: E1008 19:43:35.539088 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.539512 kubelet[2728]: E1008 19:43:35.539386 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.539512 kubelet[2728]: W1008 19:43:35.539397 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.539512 kubelet[2728]: E1008 19:43:35.539408 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.540559 kubelet[2728]: E1008 19:43:35.540530 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.540559 kubelet[2728]: W1008 19:43:35.540553 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.540652 kubelet[2728]: E1008 19:43:35.540573 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.568954 containerd[1464]: time="2024-10-08T19:43:35.568907049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ts68d,Uid:8a9fdce0-13f6-4787-92e6-a78e36379ff2,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:35.611207 containerd[1464]: time="2024-10-08T19:43:35.608323736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:35.611207 containerd[1464]: time="2024-10-08T19:43:35.608424138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:35.611207 containerd[1464]: time="2024-10-08T19:43:35.608444938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:35.611207 containerd[1464]: time="2024-10-08T19:43:35.608583661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:35.617177 containerd[1464]: time="2024-10-08T19:43:35.617098320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cdfd8657c-kbw8v,Uid:347d36fe-b2d6-4c0b-8426-0cca1f0b70ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"9febe8b68939464a641d5c403e68969a5cb0aa62f4ca80360da9ade711a9bb3a\"" Oct 8 19:43:35.621830 containerd[1464]: time="2024-10-08T19:43:35.621795597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:43:35.637606 systemd[1]: Started cri-containerd-455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4.scope - libcontainer container 455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4. 
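The repeated kubelet entries above all record the same condition: during FlexVolume plugin probing, the expected driver executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not present on the node, so each "init" driver call yields empty output, and decoding that empty output as JSON fails with "unexpected end of JSON input". The following is a minimal Go sketch, illustrative only and not part of the log, that reproduces the decode error and shows the kind of JSON status object a FlexVolume driver is conventionally expected to print for "init" (the exact field set used here is an assumption for illustration):

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
// Field names follow the common FlexVolume convention and are used here
// purely for illustration.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	var out driverStatus

	// Case 1: the driver executable is missing, so its "output" is empty.
	// encoding/json reports exactly the error seen in the kubelet log.
	if err := json.Unmarshal([]byte(""), &out); err != nil {
		fmt.Println(err) // unexpected end of JSON input
	}

	// Case 2: a driver that is present would answer "init" with a JSON
	// status such as {"status":"Success","capabilities":{"attach":false}}.
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	if err := json.Unmarshal(ok, &out); err == nil {
		fmt.Println(out.Status, out.Capabilities["attach"]) // Success false
	}
}

These errors typically stop once a driver binary that answers "init" this way is installed under the kubelet's volume-plugin directory; the flexvol-driver container created inside the calico-node sandbox further down in this log is the component that normally populates the nodeagent~uds entry.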
Oct 8 19:43:35.640860 kubelet[2728]: E1008 19:43:35.640821 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.640860 kubelet[2728]: W1008 19:43:35.640850 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.641025 kubelet[2728]: E1008 19:43:35.640874 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.642102 kubelet[2728]: E1008 19:43:35.642078 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.642218 kubelet[2728]: W1008 19:43:35.642199 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.642253 kubelet[2728]: E1008 19:43:35.642228 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.642639 kubelet[2728]: E1008 19:43:35.642621 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.642639 kubelet[2728]: W1008 19:43:35.642636 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.642705 kubelet[2728]: E1008 19:43:35.642667 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.643060 kubelet[2728]: E1008 19:43:35.643030 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.643060 kubelet[2728]: W1008 19:43:35.643058 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.643144 kubelet[2728]: E1008 19:43:35.643126 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.644356 kubelet[2728]: E1008 19:43:35.643388 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.644356 kubelet[2728]: W1008 19:43:35.643405 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.644356 kubelet[2728]: E1008 19:43:35.643494 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.644356 kubelet[2728]: E1008 19:43:35.643781 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.644356 kubelet[2728]: W1008 19:43:35.643790 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.644356 kubelet[2728]: E1008 19:43:35.643838 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.644356 kubelet[2728]: E1008 19:43:35.644017 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.644356 kubelet[2728]: W1008 19:43:35.644025 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.644356 kubelet[2728]: E1008 19:43:35.644037 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.644356 kubelet[2728]: E1008 19:43:35.644259 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.644660 kubelet[2728]: W1008 19:43:35.644268 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.644660 kubelet[2728]: E1008 19:43:35.644512 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.644660 kubelet[2728]: E1008 19:43:35.644598 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.644660 kubelet[2728]: W1008 19:43:35.644606 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.644660 kubelet[2728]: E1008 19:43:35.644622 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.645432 kubelet[2728]: E1008 19:43:35.645406 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.645432 kubelet[2728]: W1008 19:43:35.645424 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.645548 kubelet[2728]: E1008 19:43:35.645441 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.646221 kubelet[2728]: E1008 19:43:35.646187 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.646221 kubelet[2728]: W1008 19:43:35.646205 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.646696 kubelet[2728]: E1008 19:43:35.646675 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.646696 kubelet[2728]: W1008 19:43:35.646691 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.646787 kubelet[2728]: E1008 19:43:35.646703 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.646939 kubelet[2728]: E1008 19:43:35.646903 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.647495 kubelet[2728]: E1008 19:43:35.647462 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.647495 kubelet[2728]: W1008 19:43:35.647483 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.647584 kubelet[2728]: E1008 19:43:35.647510 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.648223 kubelet[2728]: E1008 19:43:35.648074 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.648223 kubelet[2728]: W1008 19:43:35.648093 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.650673 kubelet[2728]: E1008 19:43:35.650549 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.651019 kubelet[2728]: E1008 19:43:35.650879 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.651019 kubelet[2728]: W1008 19:43:35.650901 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.651019 kubelet[2728]: E1008 19:43:35.650921 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.651198 kubelet[2728]: E1008 19:43:35.651113 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.651198 kubelet[2728]: W1008 19:43:35.651127 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.651198 kubelet[2728]: E1008 19:43:35.651197 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.652614 kubelet[2728]: E1008 19:43:35.652454 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.652614 kubelet[2728]: W1008 19:43:35.652472 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.653384 kubelet[2728]: E1008 19:43:35.652817 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.653384 kubelet[2728]: W1008 19:43:35.652835 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.653384 kubelet[2728]: E1008 19:43:35.652912 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.653384 kubelet[2728]: E1008 19:43:35.652936 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.654270 kubelet[2728]: E1008 19:43:35.654242 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.654270 kubelet[2728]: W1008 19:43:35.654260 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.654425 kubelet[2728]: E1008 19:43:35.654404 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.655379 kubelet[2728]: E1008 19:43:35.655315 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.655379 kubelet[2728]: W1008 19:43:35.655332 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.655500 kubelet[2728]: E1008 19:43:35.655399 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.656606 kubelet[2728]: E1008 19:43:35.655962 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.656606 kubelet[2728]: W1008 19:43:35.655979 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.656606 kubelet[2728]: E1008 19:43:35.656157 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.656606 kubelet[2728]: E1008 19:43:35.656533 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.656606 kubelet[2728]: W1008 19:43:35.656542 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.656743 kubelet[2728]: E1008 19:43:35.656716 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.657250 kubelet[2728]: E1008 19:43:35.657231 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.657250 kubelet[2728]: W1008 19:43:35.657248 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.657407 kubelet[2728]: E1008 19:43:35.657388 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.658098 kubelet[2728]: E1008 19:43:35.658081 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.658098 kubelet[2728]: W1008 19:43:35.658096 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.658227 kubelet[2728]: E1008 19:43:35.658210 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.658350 kubelet[2728]: E1008 19:43:35.658335 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.658350 kubelet[2728]: W1008 19:43:35.658349 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.658427 kubelet[2728]: E1008 19:43:35.658386 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:35.664848 kubelet[2728]: E1008 19:43:35.664811 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:35.664848 kubelet[2728]: W1008 19:43:35.664840 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:35.665017 kubelet[2728]: E1008 19:43:35.664860 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:35.701265 containerd[1464]: time="2024-10-08T19:43:35.701172780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ts68d,Uid:8a9fdce0-13f6-4787-92e6-a78e36379ff2,Namespace:calico-system,Attempt:0,} returns sandbox id \"455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4\"" Oct 8 19:43:37.523553 containerd[1464]: time="2024-10-08T19:43:37.523423361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:37.525045 containerd[1464]: time="2024-10-08T19:43:37.524940545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:43:37.526009 containerd[1464]: time="2024-10-08T19:43:37.525911640Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:37.528082 containerd[1464]: time="2024-10-08T19:43:37.528011593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:37.529254 containerd[1464]: time="2024-10-08T19:43:37.529212932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.906671282s" Oct 8 19:43:37.529254 containerd[1464]: time="2024-10-08T19:43:37.529250853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:43:37.532443 containerd[1464]: time="2024-10-08T19:43:37.532222299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:43:37.556389 containerd[1464]: time="2024-10-08T19:43:37.556326279Z" level=info msg="CreateContainer within sandbox \"9febe8b68939464a641d5c403e68969a5cb0aa62f4ca80360da9ade711a9bb3a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:43:37.579182 containerd[1464]: time="2024-10-08T19:43:37.577739336Z" level=info msg="CreateContainer within sandbox \"9febe8b68939464a641d5c403e68969a5cb0aa62f4ca80360da9ade711a9bb3a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0a10bf6a3640a93ac1cacee5412f20f9f5f4f2bcfc8e66835174cf122823b307\"" Oct 8 19:43:37.579182 containerd[1464]: time="2024-10-08T19:43:37.578454587Z" level=info msg="StartContainer for 
\"0a10bf6a3640a93ac1cacee5412f20f9f5f4f2bcfc8e66835174cf122823b307\"" Oct 8 19:43:37.627668 systemd[1]: Started cri-containerd-0a10bf6a3640a93ac1cacee5412f20f9f5f4f2bcfc8e66835174cf122823b307.scope - libcontainer container 0a10bf6a3640a93ac1cacee5412f20f9f5f4f2bcfc8e66835174cf122823b307. Oct 8 19:43:37.683249 containerd[1464]: time="2024-10-08T19:43:37.682544106Z" level=info msg="StartContainer for \"0a10bf6a3640a93ac1cacee5412f20f9f5f4f2bcfc8e66835174cf122823b307\" returns successfully" Oct 8 19:43:37.797515 kubelet[2728]: E1008 19:43:37.796932 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsxhd" podUID="ad896521-c957-4f2e-a42a-e7387295bb9d" Oct 8 19:43:37.959539 kubelet[2728]: E1008 19:43:37.959340 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.959539 kubelet[2728]: W1008 19:43:37.959404 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.959539 kubelet[2728]: E1008 19:43:37.959428 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.959948 kubelet[2728]: E1008 19:43:37.959701 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.959948 kubelet[2728]: W1008 19:43:37.959712 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.959948 kubelet[2728]: E1008 19:43:37.959723 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.960575 kubelet[2728]: E1008 19:43:37.960264 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.960575 kubelet[2728]: W1008 19:43:37.960278 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.960575 kubelet[2728]: E1008 19:43:37.960291 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.961530 kubelet[2728]: E1008 19:43:37.960710 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.961530 kubelet[2728]: W1008 19:43:37.960722 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.961530 kubelet[2728]: E1008 19:43:37.960758 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:37.963671 kubelet[2728]: E1008 19:43:37.963640 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.963883 kubelet[2728]: W1008 19:43:37.963759 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.963883 kubelet[2728]: E1008 19:43:37.963785 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.965039 kubelet[2728]: E1008 19:43:37.964348 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.965039 kubelet[2728]: W1008 19:43:37.964385 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.965039 kubelet[2728]: E1008 19:43:37.964402 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.966559 kubelet[2728]: E1008 19:43:37.965479 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.966559 kubelet[2728]: W1008 19:43:37.965504 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.966559 kubelet[2728]: E1008 19:43:37.965519 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.966559 kubelet[2728]: E1008 19:43:37.965790 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.966559 kubelet[2728]: W1008 19:43:37.965847 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.966559 kubelet[2728]: E1008 19:43:37.965860 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.966559 kubelet[2728]: E1008 19:43:37.966041 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.966559 kubelet[2728]: W1008 19:43:37.966049 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.966559 kubelet[2728]: E1008 19:43:37.966058 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:37.966559 kubelet[2728]: E1008 19:43:37.966175 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.966896 kubelet[2728]: W1008 19:43:37.966182 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.966896 kubelet[2728]: E1008 19:43:37.966189 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.966896 kubelet[2728]: E1008 19:43:37.966313 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.966896 kubelet[2728]: W1008 19:43:37.966321 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.966896 kubelet[2728]: E1008 19:43:37.966329 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.966896 kubelet[2728]: E1008 19:43:37.966462 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.966896 kubelet[2728]: W1008 19:43:37.966470 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.966896 kubelet[2728]: E1008 19:43:37.966477 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.966896 kubelet[2728]: E1008 19:43:37.966610 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.966896 kubelet[2728]: W1008 19:43:37.966618 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.967139 kubelet[2728]: E1008 19:43:37.966625 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.967139 kubelet[2728]: E1008 19:43:37.966775 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.967139 kubelet[2728]: W1008 19:43:37.966783 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.967139 kubelet[2728]: E1008 19:43:37.966792 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:37.967139 kubelet[2728]: E1008 19:43:37.966929 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.967139 kubelet[2728]: W1008 19:43:37.966937 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.967139 kubelet[2728]: E1008 19:43:37.966945 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.967305 kubelet[2728]: E1008 19:43:37.967230 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.967305 kubelet[2728]: W1008 19:43:37.967245 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.967305 kubelet[2728]: E1008 19:43:37.967257 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.967664 kubelet[2728]: E1008 19:43:37.967634 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.967664 kubelet[2728]: W1008 19:43:37.967659 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.968476 kubelet[2728]: E1008 19:43:37.967674 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.968887 kubelet[2728]: E1008 19:43:37.968861 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.968887 kubelet[2728]: W1008 19:43:37.968884 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.969128 kubelet[2728]: E1008 19:43:37.968908 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.971609 kubelet[2728]: E1008 19:43:37.971507 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.971609 kubelet[2728]: W1008 19:43:37.971535 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.971609 kubelet[2728]: E1008 19:43:37.971566 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:37.973653 kubelet[2728]: E1008 19:43:37.973614 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.973653 kubelet[2728]: W1008 19:43:37.973645 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.973909 kubelet[2728]: E1008 19:43:37.973723 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.973909 kubelet[2728]: E1008 19:43:37.973847 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.973909 kubelet[2728]: W1008 19:43:37.973856 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.974008 kubelet[2728]: E1008 19:43:37.973925 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.974051 kubelet[2728]: E1008 19:43:37.974033 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.974051 kubelet[2728]: W1008 19:43:37.974047 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.974164 kubelet[2728]: E1008 19:43:37.974122 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.974253 kubelet[2728]: E1008 19:43:37.974221 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.974253 kubelet[2728]: W1008 19:43:37.974252 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.974326 kubelet[2728]: E1008 19:43:37.974278 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.974611 kubelet[2728]: E1008 19:43:37.974588 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.974611 kubelet[2728]: W1008 19:43:37.974603 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.974692 kubelet[2728]: E1008 19:43:37.974625 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:37.975169 kubelet[2728]: E1008 19:43:37.975144 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.975169 kubelet[2728]: W1008 19:43:37.975166 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.975266 kubelet[2728]: E1008 19:43:37.975188 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.977587 kubelet[2728]: E1008 19:43:37.977552 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.977587 kubelet[2728]: W1008 19:43:37.977581 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.977847 kubelet[2728]: E1008 19:43:37.977768 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.977917 kubelet[2728]: E1008 19:43:37.977884 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.977917 kubelet[2728]: W1008 19:43:37.977895 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.978000 kubelet[2728]: E1008 19:43:37.977983 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.978124 kubelet[2728]: E1008 19:43:37.978107 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.978124 kubelet[2728]: W1008 19:43:37.978122 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.978261 kubelet[2728]: E1008 19:43:37.978149 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.978485 kubelet[2728]: E1008 19:43:37.978458 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.978485 kubelet[2728]: W1008 19:43:37.978478 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.978567 kubelet[2728]: E1008 19:43:37.978495 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:37.979161 kubelet[2728]: E1008 19:43:37.978921 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.979161 kubelet[2728]: W1008 19:43:37.978938 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.979161 kubelet[2728]: E1008 19:43:37.978957 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.980673 kubelet[2728]: E1008 19:43:37.979452 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.980828 kubelet[2728]: W1008 19:43:37.980801 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.980920 kubelet[2728]: E1008 19:43:37.980907 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.981319 kubelet[2728]: E1008 19:43:37.981268 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.981319 kubelet[2728]: W1008 19:43:37.981286 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.981319 kubelet[2728]: E1008 19:43:37.981312 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:37.982017 kubelet[2728]: E1008 19:43:37.981525 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:37.982017 kubelet[2728]: W1008 19:43:37.981534 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:37.982017 kubelet[2728]: E1008 19:43:37.981544 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.913841 kubelet[2728]: I1008 19:43:38.913560 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:43:38.915940 containerd[1464]: time="2024-10-08T19:43:38.915880151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:38.917692 containerd[1464]: time="2024-10-08T19:43:38.917640738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:43:38.919118 containerd[1464]: time="2024-10-08T19:43:38.919068000Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:38.923265 containerd[1464]: time="2024-10-08T19:43:38.923208264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:38.924669 containerd[1464]: time="2024-10-08T19:43:38.924614206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.392341985s" Oct 8 19:43:38.924669 containerd[1464]: time="2024-10-08T19:43:38.924661246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:43:38.927216 containerd[1464]: time="2024-10-08T19:43:38.927011763Z" level=info msg="CreateContainer within sandbox \"455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:43:38.945195 containerd[1464]: time="2024-10-08T19:43:38.945115762Z" level=info msg="CreateContainer within sandbox \"455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262\"" Oct 8 19:43:38.948424 containerd[1464]: time="2024-10-08T19:43:38.946928950Z" level=info msg="StartContainer for \"fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262\"" Oct 8 19:43:38.972879 kubelet[2728]: E1008 19:43:38.972753 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.972879 kubelet[2728]: W1008 19:43:38.972777 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.972879 kubelet[2728]: E1008 19:43:38.972798 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.973109 kubelet[2728]: E1008 19:43:38.973098 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.973220 kubelet[2728]: W1008 19:43:38.973160 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.973286 kubelet[2728]: E1008 19:43:38.973275 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.973668 kubelet[2728]: E1008 19:43:38.973651 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.973846 kubelet[2728]: W1008 19:43:38.973832 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.973915 kubelet[2728]: E1008 19:43:38.973903 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.974512 kubelet[2728]: E1008 19:43:38.974497 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.974747 kubelet[2728]: W1008 19:43:38.974719 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.974813 kubelet[2728]: E1008 19:43:38.974803 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.975580 kubelet[2728]: E1008 19:43:38.975484 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.975580 kubelet[2728]: W1008 19:43:38.975499 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.975580 kubelet[2728]: E1008 19:43:38.975511 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.975744 kubelet[2728]: E1008 19:43:38.975732 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.975889 kubelet[2728]: W1008 19:43:38.975876 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.976032 kubelet[2728]: E1008 19:43:38.976019 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.976566 kubelet[2728]: E1008 19:43:38.976453 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.976566 kubelet[2728]: W1008 19:43:38.976466 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.976566 kubelet[2728]: E1008 19:43:38.976477 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.977236 kubelet[2728]: E1008 19:43:38.976957 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.977236 kubelet[2728]: W1008 19:43:38.976972 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.977236 kubelet[2728]: E1008 19:43:38.976983 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.978003 kubelet[2728]: E1008 19:43:38.977747 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.978003 kubelet[2728]: W1008 19:43:38.977860 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.978003 kubelet[2728]: E1008 19:43:38.977874 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.979441 kubelet[2728]: E1008 19:43:38.978879 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.979441 kubelet[2728]: W1008 19:43:38.978896 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.979441 kubelet[2728]: E1008 19:43:38.978916 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.979634 kubelet[2728]: E1008 19:43:38.979620 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.979693 kubelet[2728]: W1008 19:43:38.979682 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.979743 kubelet[2728]: E1008 19:43:38.979733 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.980158 kubelet[2728]: E1008 19:43:38.980144 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.980475 kubelet[2728]: W1008 19:43:38.980345 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.980475 kubelet[2728]: E1008 19:43:38.980377 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.980939 kubelet[2728]: E1008 19:43:38.980785 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.980939 kubelet[2728]: W1008 19:43:38.980798 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.980939 kubelet[2728]: E1008 19:43:38.980809 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.981159 kubelet[2728]: E1008 19:43:38.981073 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.981159 kubelet[2728]: W1008 19:43:38.981084 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.981159 kubelet[2728]: E1008 19:43:38.981094 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.981574 kubelet[2728]: E1008 19:43:38.981504 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.981574 kubelet[2728]: W1008 19:43:38.981520 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.981574 kubelet[2728]: E1008 19:43:38.981531 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.982194 kubelet[2728]: E1008 19:43:38.982031 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.982194 kubelet[2728]: W1008 19:43:38.982045 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.982194 kubelet[2728]: E1008 19:43:38.982056 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.982482 kubelet[2728]: E1008 19:43:38.982333 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.982482 kubelet[2728]: W1008 19:43:38.982345 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.982482 kubelet[2728]: E1008 19:43:38.982356 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.983046 kubelet[2728]: E1008 19:43:38.982865 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.983046 kubelet[2728]: W1008 19:43:38.982879 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.983046 kubelet[2728]: E1008 19:43:38.982896 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.983678 kubelet[2728]: E1008 19:43:38.983565 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.983678 kubelet[2728]: W1008 19:43:38.983579 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.983678 kubelet[2728]: E1008 19:43:38.983597 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.983979 kubelet[2728]: E1008 19:43:38.983891 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.983979 kubelet[2728]: W1008 19:43:38.983903 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.983979 kubelet[2728]: E1008 19:43:38.983913 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.984385 kubelet[2728]: E1008 19:43:38.984230 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.984385 kubelet[2728]: W1008 19:43:38.984241 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.984385 kubelet[2728]: E1008 19:43:38.984254 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.984972 kubelet[2728]: E1008 19:43:38.984956 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.985311 kubelet[2728]: W1008 19:43:38.985044 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.985311 kubelet[2728]: E1008 19:43:38.985062 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.985623 kubelet[2728]: E1008 19:43:38.985609 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.985701 kubelet[2728]: W1008 19:43:38.985689 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.985751 kubelet[2728]: E1008 19:43:38.985742 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.985965 kubelet[2728]: E1008 19:43:38.985954 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.986110 kubelet[2728]: W1008 19:43:38.986030 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.986110 kubelet[2728]: E1008 19:43:38.986047 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.986264 kubelet[2728]: E1008 19:43:38.986254 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.987497 kubelet[2728]: W1008 19:43:38.986398 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.987497 kubelet[2728]: E1008 19:43:38.986415 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.987773 kubelet[2728]: E1008 19:43:38.987758 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.988920 kubelet[2728]: W1008 19:43:38.987848 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.989136 kubelet[2728]: E1008 19:43:38.989119 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.989292 kubelet[2728]: W1008 19:43:38.989194 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.989292 kubelet[2728]: E1008 19:43:38.989213 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.989623 kubelet[2728]: E1008 19:43:38.989476 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.989623 kubelet[2728]: W1008 19:43:38.989488 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.989623 kubelet[2728]: E1008 19:43:38.989498 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.990008 kubelet[2728]: E1008 19:43:38.989770 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.990008 kubelet[2728]: W1008 19:43:38.989783 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.990008 kubelet[2728]: E1008 19:43:38.989793 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.990609 kubelet[2728]: E1008 19:43:38.990285 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.990609 kubelet[2728]: W1008 19:43:38.990300 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.990609 kubelet[2728]: E1008 19:43:38.990311 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:43:38.990858 kubelet[2728]: E1008 19:43:38.990845 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.990922 kubelet[2728]: W1008 19:43:38.990911 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.991091 kubelet[2728]: E1008 19:43:38.990975 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.991091 kubelet[2728]: E1008 19:43:38.991006 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.991292 kubelet[2728]: E1008 19:43:38.991279 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.993196 kubelet[2728]: W1008 19:43:38.991350 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.993196 kubelet[2728]: E1008 19:43:38.992419 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:38.994052 kubelet[2728]: E1008 19:43:38.994036 2728 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:43:38.994441 kubelet[2728]: W1008 19:43:38.994425 2728 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:43:38.994789 kubelet[2728]: E1008 19:43:38.994511 2728 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:43:39.004618 systemd[1]: Started cri-containerd-fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262.scope - libcontainer container fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262. Oct 8 19:43:39.045677 containerd[1464]: time="2024-10-08T19:43:39.045529777Z" level=info msg="StartContainer for \"fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262\" returns successfully" Oct 8 19:43:39.070050 systemd[1]: cri-containerd-fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262.scope: Deactivated successfully. Oct 8 19:43:39.102836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262-rootfs.mount: Deactivated successfully. 
Oct 8 19:43:39.218708 containerd[1464]: time="2024-10-08T19:43:39.218512391Z" level=info msg="shim disconnected" id=fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262 namespace=k8s.io Oct 8 19:43:39.218708 containerd[1464]: time="2024-10-08T19:43:39.218579672Z" level=warning msg="cleaning up after shim disconnected" id=fc355a3a026d473e57b34cc7328401e25f22f9185d5805bb01f4fc96db22b262 namespace=k8s.io Oct 8 19:43:39.218708 containerd[1464]: time="2024-10-08T19:43:39.218590952Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:43:39.797934 kubelet[2728]: E1008 19:43:39.797231 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsxhd" podUID="ad896521-c957-4f2e-a42a-e7387295bb9d" Oct 8 19:43:39.921148 containerd[1464]: time="2024-10-08T19:43:39.921083050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 19:43:39.945651 kubelet[2728]: I1008 19:43:39.945015 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cdfd8657c-kbw8v" podStartSLOduration=3.033725575 podStartE2EDuration="4.944994852s" podCreationTimestamp="2024-10-08 19:43:35 +0000 UTC" firstStartedPulling="2024-10-08 19:43:35.619847685 +0000 UTC m=+22.953320229" lastFinishedPulling="2024-10-08 19:43:37.531116922 +0000 UTC m=+24.864589506" observedRunningTime="2024-10-08 19:43:37.930211445 +0000 UTC m=+25.263684029" watchObservedRunningTime="2024-10-08 19:43:39.944994852 +0000 UTC m=+27.278467396" Oct 8 19:43:41.796892 kubelet[2728]: E1008 19:43:41.796786 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsxhd" podUID="ad896521-c957-4f2e-a42a-e7387295bb9d" Oct 8 19:43:42.471417 containerd[1464]: time="2024-10-08T19:43:42.471345454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:42.472942 containerd[1464]: time="2024-10-08T19:43:42.472889276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 19:43:42.475403 containerd[1464]: time="2024-10-08T19:43:42.474314176Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:42.476872 containerd[1464]: time="2024-10-08T19:43:42.476827772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:42.478062 containerd[1464]: time="2024-10-08T19:43:42.478007909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 2.556862098s" Oct 8 19:43:42.478062 containerd[1464]: time="2024-10-08T19:43:42.478059910Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 19:43:42.482731 containerd[1464]: time="2024-10-08T19:43:42.482684296Z" level=info msg="CreateContainer within sandbox \"455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 19:43:42.499954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843897670.mount: Deactivated successfully. Oct 8 19:43:42.503523 containerd[1464]: time="2024-10-08T19:43:42.503198948Z" level=info msg="CreateContainer within sandbox \"455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b\"" Oct 8 19:43:42.504133 containerd[1464]: time="2024-10-08T19:43:42.503792116Z" level=info msg="StartContainer for \"747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b\"" Oct 8 19:43:42.546655 systemd[1]: Started cri-containerd-747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b.scope - libcontainer container 747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b. Oct 8 19:43:42.584845 containerd[1464]: time="2024-10-08T19:43:42.583272448Z" level=info msg="StartContainer for \"747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b\" returns successfully" Oct 8 19:43:43.340250 containerd[1464]: time="2024-10-08T19:43:43.340172495Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:43:43.344063 systemd[1]: cri-containerd-747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b.scope: Deactivated successfully. Oct 8 19:43:43.378388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b-rootfs.mount: Deactivated successfully. Oct 8 19:43:43.389423 kubelet[2728]: I1008 19:43:43.386619 2728 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 19:43:43.419226 kubelet[2728]: I1008 19:43:43.419019 2728 topology_manager.go:215] "Topology Admit Handler" podUID="76af55ff-f2b4-42c0-ae6e-0c08786cce40" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gf7n9" Oct 8 19:43:43.429337 kubelet[2728]: I1008 19:43:43.428405 2728 topology_manager.go:215] "Topology Admit Handler" podUID="8892696d-63f0-457d-bddb-381ce2acd1db" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lgxnr" Oct 8 19:43:43.434563 kubelet[2728]: I1008 19:43:43.434420 2728 topology_manager.go:215] "Topology Admit Handler" podUID="69f1a87a-d603-47b1-9d39-88a882952364" podNamespace="calico-system" podName="calico-kube-controllers-54f6669d9f-lvf2t" Oct 8 19:43:43.440912 systemd[1]: Created slice kubepods-burstable-pod76af55ff_f2b4_42c0_ae6e_0c08786cce40.slice - libcontainer container kubepods-burstable-pod76af55ff_f2b4_42c0_ae6e_0c08786cce40.slice. Oct 8 19:43:43.454758 systemd[1]: Created slice kubepods-burstable-pod8892696d_63f0_457d_bddb_381ce2acd1db.slice - libcontainer container kubepods-burstable-pod8892696d_63f0_457d_bddb_381ce2acd1db.slice. 
Oct 8 19:43:43.463504 systemd[1]: Created slice kubepods-besteffort-pod69f1a87a_d603_47b1_9d39_88a882952364.slice - libcontainer container kubepods-besteffort-pod69f1a87a_d603_47b1_9d39_88a882952364.slice. Oct 8 19:43:43.519215 kubelet[2728]: I1008 19:43:43.519165 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mhc\" (UniqueName: \"kubernetes.io/projected/8892696d-63f0-457d-bddb-381ce2acd1db-kube-api-access-w9mhc\") pod \"coredns-7db6d8ff4d-lgxnr\" (UID: \"8892696d-63f0-457d-bddb-381ce2acd1db\") " pod="kube-system/coredns-7db6d8ff4d-lgxnr" Oct 8 19:43:43.520171 kubelet[2728]: I1008 19:43:43.520064 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76af55ff-f2b4-42c0-ae6e-0c08786cce40-config-volume\") pod \"coredns-7db6d8ff4d-gf7n9\" (UID: \"76af55ff-f2b4-42c0-ae6e-0c08786cce40\") " pod="kube-system/coredns-7db6d8ff4d-gf7n9" Oct 8 19:43:43.522138 kubelet[2728]: I1008 19:43:43.521604 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69f1a87a-d603-47b1-9d39-88a882952364-tigera-ca-bundle\") pod \"calico-kube-controllers-54f6669d9f-lvf2t\" (UID: \"69f1a87a-d603-47b1-9d39-88a882952364\") " pod="calico-system/calico-kube-controllers-54f6669d9f-lvf2t" Oct 8 19:43:43.522138 kubelet[2728]: I1008 19:43:43.521913 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4prvw\" (UniqueName: \"kubernetes.io/projected/76af55ff-f2b4-42c0-ae6e-0c08786cce40-kube-api-access-4prvw\") pod \"coredns-7db6d8ff4d-gf7n9\" (UID: \"76af55ff-f2b4-42c0-ae6e-0c08786cce40\") " pod="kube-system/coredns-7db6d8ff4d-gf7n9" Oct 8 19:43:43.522138 kubelet[2728]: I1008 19:43:43.521978 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8892696d-63f0-457d-bddb-381ce2acd1db-config-volume\") pod \"coredns-7db6d8ff4d-lgxnr\" (UID: \"8892696d-63f0-457d-bddb-381ce2acd1db\") " pod="kube-system/coredns-7db6d8ff4d-lgxnr" Oct 8 19:43:43.522138 kubelet[2728]: I1008 19:43:43.522017 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtwl\" (UniqueName: \"kubernetes.io/projected/69f1a87a-d603-47b1-9d39-88a882952364-kube-api-access-kmtwl\") pod \"calico-kube-controllers-54f6669d9f-lvf2t\" (UID: \"69f1a87a-d603-47b1-9d39-88a882952364\") " pod="calico-system/calico-kube-controllers-54f6669d9f-lvf2t" Oct 8 19:43:43.527962 containerd[1464]: time="2024-10-08T19:43:43.527632434Z" level=info msg="shim disconnected" id=747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b namespace=k8s.io Oct 8 19:43:43.527962 containerd[1464]: time="2024-10-08T19:43:43.527701875Z" level=warning msg="cleaning up after shim disconnected" id=747d184ed1cb62fac2904adcc1740a94a62649cc51c6d8a5eee76e868091cf9b namespace=k8s.io Oct 8 19:43:43.527962 containerd[1464]: time="2024-10-08T19:43:43.527712355Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:43:43.749604 containerd[1464]: time="2024-10-08T19:43:43.749390212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gf7n9,Uid:76af55ff-f2b4-42c0-ae6e-0c08786cce40,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:43.761214 containerd[1464]: 
time="2024-10-08T19:43:43.760942893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgxnr,Uid:8892696d-63f0-457d-bddb-381ce2acd1db,Namespace:kube-system,Attempt:0,}" Oct 8 19:43:43.770126 containerd[1464]: time="2024-10-08T19:43:43.770060940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f6669d9f-lvf2t,Uid:69f1a87a-d603-47b1-9d39-88a882952364,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:43.820015 systemd[1]: Created slice kubepods-besteffort-podad896521_c957_4f2e_a42a_e7387295bb9d.slice - libcontainer container kubepods-besteffort-podad896521_c957_4f2e_a42a_e7387295bb9d.slice. Oct 8 19:43:43.832170 containerd[1464]: time="2024-10-08T19:43:43.830726548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsxhd,Uid:ad896521-c957-4f2e-a42a-e7387295bb9d,Namespace:calico-system,Attempt:0,}" Oct 8 19:43:43.950690 containerd[1464]: time="2024-10-08T19:43:43.950647983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 19:43:44.021749 containerd[1464]: time="2024-10-08T19:43:44.021562488Z" level=error msg="Failed to destroy network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.025804 containerd[1464]: time="2024-10-08T19:43:44.025714305Z" level=error msg="encountered an error cleaning up failed sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.026192 containerd[1464]: time="2024-10-08T19:43:44.026131151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgxnr,Uid:8892696d-63f0-457d-bddb-381ce2acd1db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.028352 kubelet[2728]: E1008 19:43:44.026999 2728 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.028352 kubelet[2728]: E1008 19:43:44.027127 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lgxnr" Oct 8 19:43:44.028352 kubelet[2728]: E1008 19:43:44.027153 2728 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lgxnr" Oct 8 19:43:44.028598 kubelet[2728]: E1008 19:43:44.027216 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lgxnr_kube-system(8892696d-63f0-457d-bddb-381ce2acd1db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lgxnr_kube-system(8892696d-63f0-457d-bddb-381ce2acd1db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lgxnr" podUID="8892696d-63f0-457d-bddb-381ce2acd1db" Oct 8 19:43:44.045687 containerd[1464]: time="2024-10-08T19:43:44.045635978Z" level=error msg="Failed to destroy network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.046134 containerd[1464]: time="2024-10-08T19:43:44.046102904Z" level=error msg="encountered an error cleaning up failed sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.046259 containerd[1464]: time="2024-10-08T19:43:44.046235786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsxhd,Uid:ad896521-c957-4f2e-a42a-e7387295bb9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.046559 kubelet[2728]: E1008 19:43:44.046517 2728 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.046762 containerd[1464]: time="2024-10-08T19:43:44.046734473Z" level=error msg="Failed to destroy network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.047206 containerd[1464]: time="2024-10-08T19:43:44.047175479Z" level=error msg="encountered an error cleaning up failed sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.047354 kubelet[2728]: E1008 19:43:44.046958 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rsxhd" Oct 8 19:43:44.047354 kubelet[2728]: E1008 19:43:44.047298 2728 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rsxhd" Oct 8 19:43:44.047530 kubelet[2728]: E1008 19:43:44.047498 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rsxhd_calico-system(ad896521-c957-4f2e-a42a-e7387295bb9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rsxhd_calico-system(ad896521-c957-4f2e-a42a-e7387295bb9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rsxhd" podUID="ad896521-c957-4f2e-a42a-e7387295bb9d" Oct 8 19:43:44.047744 containerd[1464]: time="2024-10-08T19:43:44.047687926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f6669d9f-lvf2t,Uid:69f1a87a-d603-47b1-9d39-88a882952364,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.048116 kubelet[2728]: E1008 19:43:44.047990 2728 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.048116 kubelet[2728]: E1008 19:43:44.048030 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f6669d9f-lvf2t" Oct 8 19:43:44.048116 kubelet[2728]: E1008 
19:43:44.048047 2728 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54f6669d9f-lvf2t" Oct 8 19:43:44.048220 kubelet[2728]: E1008 19:43:44.048077 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54f6669d9f-lvf2t_calico-system(69f1a87a-d603-47b1-9d39-88a882952364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54f6669d9f-lvf2t_calico-system(69f1a87a-d603-47b1-9d39-88a882952364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f6669d9f-lvf2t" podUID="69f1a87a-d603-47b1-9d39-88a882952364" Oct 8 19:43:44.054306 containerd[1464]: time="2024-10-08T19:43:44.053798730Z" level=error msg="Failed to destroy network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.054306 containerd[1464]: time="2024-10-08T19:43:44.054163655Z" level=error msg="encountered an error cleaning up failed sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.054306 containerd[1464]: time="2024-10-08T19:43:44.054216095Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gf7n9,Uid:76af55ff-f2b4-42c0-ae6e-0c08786cce40,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.055611 kubelet[2728]: E1008 19:43:44.054848 2728 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:44.055729 kubelet[2728]: E1008 19:43:44.055645 2728 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gf7n9" Oct 8 19:43:44.055729 kubelet[2728]: E1008 19:43:44.055673 2728 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gf7n9" Oct 8 19:43:44.055812 kubelet[2728]: E1008 19:43:44.055715 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gf7n9_kube-system(76af55ff-f2b4-42c0-ae6e-0c08786cce40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gf7n9_kube-system(76af55ff-f2b4-42c0-ae6e-0c08786cce40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gf7n9" podUID="76af55ff-f2b4-42c0-ae6e-0c08786cce40" Oct 8 19:43:44.651790 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353-shm.mount: Deactivated successfully. Oct 8 19:43:44.651947 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e-shm.mount: Deactivated successfully. Oct 8 19:43:44.950413 kubelet[2728]: I1008 19:43:44.947755 2728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:43:44.950826 containerd[1464]: time="2024-10-08T19:43:44.948904677Z" level=info msg="StopPodSandbox for \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\"" Oct 8 19:43:44.950826 containerd[1464]: time="2024-10-08T19:43:44.949228201Z" level=info msg="Ensure that sandbox af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697 in task-service has been cleanup successfully" Oct 8 19:43:44.955965 kubelet[2728]: I1008 19:43:44.951798 2728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:43:44.956094 containerd[1464]: time="2024-10-08T19:43:44.952550567Z" level=info msg="StopPodSandbox for \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\"" Oct 8 19:43:44.957234 containerd[1464]: time="2024-10-08T19:43:44.957162870Z" level=info msg="Ensure that sandbox db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5 in task-service has been cleanup successfully" Oct 8 19:43:44.958075 kubelet[2728]: I1008 19:43:44.958047 2728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:43:44.960135 containerd[1464]: time="2024-10-08T19:43:44.959907668Z" level=info msg="StopPodSandbox for \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\"" Oct 8 19:43:44.960515 containerd[1464]: time="2024-10-08T19:43:44.960487436Z" level=info msg="Ensure that sandbox 
bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353 in task-service has been cleanup successfully" Oct 8 19:43:44.962791 kubelet[2728]: I1008 19:43:44.962760 2728 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:43:44.963459 containerd[1464]: time="2024-10-08T19:43:44.963310314Z" level=info msg="StopPodSandbox for \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\"" Oct 8 19:43:44.964642 containerd[1464]: time="2024-10-08T19:43:44.964451530Z" level=info msg="Ensure that sandbox ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e in task-service has been cleanup successfully" Oct 8 19:43:45.015115 containerd[1464]: time="2024-10-08T19:43:45.015060980Z" level=error msg="StopPodSandbox for \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\" failed" error="failed to destroy network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:45.015647 kubelet[2728]: E1008 19:43:45.015467 2728 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:43:45.015647 kubelet[2728]: E1008 19:43:45.015524 2728 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e"} Oct 8 19:43:45.015647 kubelet[2728]: E1008 19:43:45.015585 2728 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"76af55ff-f2b4-42c0-ae6e-0c08786cce40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:45.015647 kubelet[2728]: E1008 19:43:45.015612 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"76af55ff-f2b4-42c0-ae6e-0c08786cce40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gf7n9" podUID="76af55ff-f2b4-42c0-ae6e-0c08786cce40" Oct 8 19:43:45.028377 containerd[1464]: time="2024-10-08T19:43:45.028097995Z" level=error msg="StopPodSandbox for \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\" failed" error="failed to destroy network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:45.028875 kubelet[2728]: E1008 19:43:45.028662 2728 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:43:45.028875 kubelet[2728]: E1008 19:43:45.028723 2728 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5"} Oct 8 19:43:45.028875 kubelet[2728]: E1008 19:43:45.028758 2728 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad896521-c957-4f2e-a42a-e7387295bb9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:45.028875 kubelet[2728]: E1008 19:43:45.028782 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad896521-c957-4f2e-a42a-e7387295bb9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rsxhd" podUID="ad896521-c957-4f2e-a42a-e7387295bb9d" Oct 8 19:43:45.032117 containerd[1464]: time="2024-10-08T19:43:45.031894166Z" level=error msg="StopPodSandbox for \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\" failed" error="failed to destroy network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:45.033724 kubelet[2728]: E1008 19:43:45.033565 2728 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:43:45.033724 kubelet[2728]: E1008 19:43:45.033618 2728 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697"} Oct 8 19:43:45.033724 kubelet[2728]: E1008 19:43:45.033656 2728 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8892696d-63f0-457d-bddb-381ce2acd1db\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:45.033724 kubelet[2728]: E1008 19:43:45.033682 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8892696d-63f0-457d-bddb-381ce2acd1db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lgxnr" podUID="8892696d-63f0-457d-bddb-381ce2acd1db" Oct 8 19:43:45.034322 containerd[1464]: time="2024-10-08T19:43:45.034280838Z" level=error msg="StopPodSandbox for \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\" failed" error="failed to destroy network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:43:45.034712 kubelet[2728]: E1008 19:43:45.034575 2728 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:43:45.034712 kubelet[2728]: E1008 19:43:45.034614 2728 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353"} Oct 8 19:43:45.034712 kubelet[2728]: E1008 19:43:45.034643 2728 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69f1a87a-d603-47b1-9d39-88a882952364\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:43:45.034712 kubelet[2728]: E1008 19:43:45.034666 2728 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69f1a87a-d603-47b1-9d39-88a882952364\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54f6669d9f-lvf2t" podUID="69f1a87a-d603-47b1-9d39-88a882952364" Oct 8 19:43:47.521593 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2694188120.mount: Deactivated successfully. Oct 8 19:43:47.554802 containerd[1464]: time="2024-10-08T19:43:47.553664722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:47.554802 containerd[1464]: time="2024-10-08T19:43:47.554405452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:43:47.555717 containerd[1464]: time="2024-10-08T19:43:47.555660468Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:47.558411 containerd[1464]: time="2024-10-08T19:43:47.558323863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:43:47.560032 containerd[1464]: time="2024-10-08T19:43:47.559972044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 3.609065698s" Oct 8 19:43:47.560032 containerd[1464]: time="2024-10-08T19:43:47.560017965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:43:47.579541 containerd[1464]: time="2024-10-08T19:43:47.579493337Z" level=info msg="CreateContainer within sandbox \"455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:43:47.596292 containerd[1464]: time="2024-10-08T19:43:47.596200154Z" level=info msg="CreateContainer within sandbox \"455b2e5bc2f714f754492a1a01ebb1f3c6e7b31a547bf968be139bdcf264f1b4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bac7f7c4ded82326647cd427bed1156e8b56957b963d1c339337f2c5c941d2e6\"" Oct 8 19:43:47.600374 containerd[1464]: time="2024-10-08T19:43:47.599045631Z" level=info msg="StartContainer for \"bac7f7c4ded82326647cd427bed1156e8b56957b963d1c339337f2c5c941d2e6\"" Oct 8 19:43:47.629619 systemd[1]: Started cri-containerd-bac7f7c4ded82326647cd427bed1156e8b56957b963d1c339337f2c5c941d2e6.scope - libcontainer container bac7f7c4ded82326647cd427bed1156e8b56957b963d1c339337f2c5c941d2e6. Oct 8 19:43:47.676549 containerd[1464]: time="2024-10-08T19:43:47.675855346Z" level=info msg="StartContainer for \"bac7f7c4ded82326647cd427bed1156e8b56957b963d1c339337f2c5c941d2e6\" returns successfully" Oct 8 19:43:47.837628 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:43:47.837974 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
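The repeated CreatePodSandbox/KillPodSandbox failures above all hinge on one missing file: the Calico CNI plugin stats /var/lib/calico/nodename before it will set up or tear down a pod network, and that file only exists once the calico/node container (whose image pull and StartContainer are logged just above) is running with /var/lib/calico mounted from the host. Below is a minimal Go sketch of that same check, not Calico's actual code; the path and the failure condition are taken from the log entries above, everything else (names, output) is illustrative.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is the file the CNI plugin looks for before handling
    // ADD/DEL requests; calico/node writes it once it is running and has
    // /var/lib/calico mounted from the host.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // Same condition as the errors above:
            // "stat /var/lib/calico/nodename: no such file or directory"
            fmt.Fprintf(os.Stderr, "calico/node has not written %s yet: %v\n", nodenameFile, err)
            os.Exit(1)
        }
        fmt.Printf("Calico node name: %s\n", strings.TrimSpace(string(data)))
    }

Once calico-node is up (its pod startup is reported at 19:43:48 in the next entry), the later StopPodSandbox and RunPodSandbox calls for these same sandboxes succeed instead of returning this error.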
Oct 8 19:43:48.008150 kubelet[2728]: I1008 19:43:48.008080 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ts68d" podStartSLOduration=1.150018409 podStartE2EDuration="13.00806065s" podCreationTimestamp="2024-10-08 19:43:35 +0000 UTC" firstStartedPulling="2024-10-08 19:43:35.704943842 +0000 UTC m=+23.038416346" lastFinishedPulling="2024-10-08 19:43:47.562986083 +0000 UTC m=+34.896458587" observedRunningTime="2024-10-08 19:43:48.006572631 +0000 UTC m=+35.340045215" watchObservedRunningTime="2024-10-08 19:43:48.00806065 +0000 UTC m=+35.341533194" Oct 8 19:43:55.797775 containerd[1464]: time="2024-10-08T19:43:55.797715830Z" level=info msg="StopPodSandbox for \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\"" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.881 [INFO][4056] k8s.go 608: Cleaning up netns ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.882 [INFO][4056] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" iface="eth0" netns="/var/run/netns/cni-9497358c-339a-d13b-8590-b8f9a7f004bd" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.883 [INFO][4056] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" iface="eth0" netns="/var/run/netns/cni-9497358c-339a-d13b-8590-b8f9a7f004bd" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.884 [INFO][4056] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" iface="eth0" netns="/var/run/netns/cni-9497358c-339a-d13b-8590-b8f9a7f004bd" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.884 [INFO][4056] k8s.go 615: Releasing IP address(es) ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.884 [INFO][4056] utils.go 188: Calico CNI releasing IP address ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.962 [INFO][4070] ipam_plugin.go 417: Releasing address using handleID ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.962 [INFO][4070] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.962 [INFO][4070] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.986 [WARNING][4070] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.987 [INFO][4070] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.989 [INFO][4070] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:55.994466 containerd[1464]: 2024-10-08 19:43:55.991 [INFO][4056] k8s.go 621: Teardown processing complete. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:43:55.995039 containerd[1464]: time="2024-10-08T19:43:55.994849895Z" level=info msg="TearDown network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\" successfully" Oct 8 19:43:55.995039 containerd[1464]: time="2024-10-08T19:43:55.994887576Z" level=info msg="StopPodSandbox for \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\" returns successfully" Oct 8 19:43:55.996324 containerd[1464]: time="2024-10-08T19:43:55.995909827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gf7n9,Uid:76af55ff-f2b4-42c0-ae6e-0c08786cce40,Namespace:kube-system,Attempt:1,}" Oct 8 19:43:55.998939 systemd[1]: run-netns-cni\x2d9497358c\x2d339a\x2dd13b\x2d8590\x2db8f9a7f004bd.mount: Deactivated successfully. Oct 8 19:43:56.162216 systemd-networkd[1373]: cali893895766a1: Link UP Oct 8 19:43:56.162772 systemd-networkd[1373]: cali893895766a1: Gained carrier Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.043 [INFO][4093] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.062 [INFO][4093] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0 coredns-7db6d8ff4d- kube-system 76af55ff-f2b4-42c0-ae6e-0c08786cce40 699 0 2024-10-08 19:43:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-5-28a2d443fc coredns-7db6d8ff4d-gf7n9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali893895766a1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.062 [INFO][4093] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.091 [INFO][4101] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" 
HandleID="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.110 [INFO][4101] ipam_plugin.go 270: Auto assigning IP ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" HandleID="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316530), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-5-28a2d443fc", "pod":"coredns-7db6d8ff4d-gf7n9", "timestamp":"2024-10-08 19:43:56.091811053 +0000 UTC"}, Hostname:"ci-3975-2-2-5-28a2d443fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.111 [INFO][4101] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.111 [INFO][4101] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.111 [INFO][4101] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-5-28a2d443fc' Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.113 [INFO][4101] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.119 [INFO][4101] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.127 [INFO][4101] ipam.go 489: Trying affinity for 192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.129 [INFO][4101] ipam.go 155: Attempting to load block cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.132 [INFO][4101] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.132 [INFO][4101] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.134 [INFO][4101] ipam.go 1685: Creating new handle: k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2 Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.140 [INFO][4101] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.147 [INFO][4101] ipam.go 1216: Successfully claimed IPs: [192.168.45.193/26] block=192.168.45.192/26 handle="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.147 [INFO][4101] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.45.193/26] handle="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.147 [INFO][4101] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:56.178921 containerd[1464]: 2024-10-08 19:43:56.147 [INFO][4101] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.45.193/26] IPv6=[] ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" HandleID="k8s-pod-network.2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:56.180894 containerd[1464]: 2024-10-08 19:43:56.150 [INFO][4093] k8s.go 386: Populated endpoint ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76af55ff-f2b4-42c0-ae6e-0c08786cce40", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"", Pod:"coredns-7db6d8ff4d-gf7n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali893895766a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:56.180894 containerd[1464]: 2024-10-08 19:43:56.150 [INFO][4093] k8s.go 387: Calico CNI using IPs: [192.168.45.193/32] ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:56.180894 containerd[1464]: 2024-10-08 19:43:56.150 [INFO][4093] dataplane_linux.go 68: Setting the host side veth name to cali893895766a1 ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:56.180894 containerd[1464]: 2024-10-08 
19:43:56.158 [INFO][4093] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:56.180894 containerd[1464]: 2024-10-08 19:43:56.158 [INFO][4093] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76af55ff-f2b4-42c0-ae6e-0c08786cce40", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2", Pod:"coredns-7db6d8ff4d-gf7n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali893895766a1", MAC:"b2:c9:55:70:bb:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:56.180894 containerd[1464]: 2024-10-08 19:43:56.174 [INFO][4093] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gf7n9" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:43:56.210918 containerd[1464]: time="2024-10-08T19:43:56.210740615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:56.210918 containerd[1464]: time="2024-10-08T19:43:56.210804575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:56.210918 containerd[1464]: time="2024-10-08T19:43:56.210820855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:56.210918 containerd[1464]: time="2024-10-08T19:43:56.210839616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:56.241979 systemd[1]: Started cri-containerd-2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2.scope - libcontainer container 2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2. Oct 8 19:43:56.283254 containerd[1464]: time="2024-10-08T19:43:56.283205740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gf7n9,Uid:76af55ff-f2b4-42c0-ae6e-0c08786cce40,Namespace:kube-system,Attempt:1,} returns sandbox id \"2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2\"" Oct 8 19:43:56.289324 containerd[1464]: time="2024-10-08T19:43:56.289268047Z" level=info msg="CreateContainer within sandbox \"2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:43:56.290244 kubelet[2728]: I1008 19:43:56.289849 2728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:43:56.325809 containerd[1464]: time="2024-10-08T19:43:56.325753452Z" level=info msg="CreateContainer within sandbox \"2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"07daa1013c105f4fa31869a8cba88bd7ed5511cfbabb1eaba8cc75296f300987\"" Oct 8 19:43:56.327043 containerd[1464]: time="2024-10-08T19:43:56.327003946Z" level=info msg="StartContainer for \"07daa1013c105f4fa31869a8cba88bd7ed5511cfbabb1eaba8cc75296f300987\"" Oct 8 19:43:56.369659 systemd[1]: Started cri-containerd-07daa1013c105f4fa31869a8cba88bd7ed5511cfbabb1eaba8cc75296f300987.scope - libcontainer container 07daa1013c105f4fa31869a8cba88bd7ed5511cfbabb1eaba8cc75296f300987. Oct 8 19:43:56.409327 containerd[1464]: time="2024-10-08T19:43:56.409235420Z" level=info msg="StartContainer for \"07daa1013c105f4fa31869a8cba88bd7ed5511cfbabb1eaba8cc75296f300987\" returns successfully" Oct 8 19:43:56.801027 containerd[1464]: time="2024-10-08T19:43:56.799607556Z" level=info msg="StopPodSandbox for \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\"" Oct 8 19:43:56.801027 containerd[1464]: time="2024-10-08T19:43:56.799607636Z" level=info msg="StopPodSandbox for \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\"" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.878 [INFO][4230] k8s.go 608: Cleaning up netns ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.878 [INFO][4230] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" iface="eth0" netns="/var/run/netns/cni-a046d845-2a1b-59a0-7123-2b64043a58bd" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.879 [INFO][4230] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" iface="eth0" netns="/var/run/netns/cni-a046d845-2a1b-59a0-7123-2b64043a58bd" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.879 [INFO][4230] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" iface="eth0" netns="/var/run/netns/cni-a046d845-2a1b-59a0-7123-2b64043a58bd" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.879 [INFO][4230] k8s.go 615: Releasing IP address(es) ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.879 [INFO][4230] utils.go 188: Calico CNI releasing IP address ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.927 [INFO][4248] ipam_plugin.go 417: Releasing address using handleID ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.932 [INFO][4248] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.932 [INFO][4248] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.957 [WARNING][4248] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.958 [INFO][4248] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.961 [INFO][4248] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:56.975289 containerd[1464]: 2024-10-08 19:43:56.968 [INFO][4230] k8s.go 621: Teardown processing complete. ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:43:56.975799 containerd[1464]: time="2024-10-08T19:43:56.975465590Z" level=info msg="TearDown network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\" successfully" Oct 8 19:43:56.975799 containerd[1464]: time="2024-10-08T19:43:56.975505350Z" level=info msg="StopPodSandbox for \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\" returns successfully" Oct 8 19:43:56.978048 containerd[1464]: time="2024-10-08T19:43:56.977967058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f6669d9f-lvf2t,Uid:69f1a87a-d603-47b1-9d39-88a882952364,Namespace:calico-system,Attempt:1,}" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.911 [INFO][4240] k8s.go 608: Cleaning up netns ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.911 [INFO][4240] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" iface="eth0" netns="/var/run/netns/cni-b959da33-3b5e-ffa5-9f80-7f901429f450" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.911 [INFO][4240] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" iface="eth0" netns="/var/run/netns/cni-b959da33-3b5e-ffa5-9f80-7f901429f450" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.911 [INFO][4240] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" iface="eth0" netns="/var/run/netns/cni-b959da33-3b5e-ffa5-9f80-7f901429f450" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.911 [INFO][4240] k8s.go 615: Releasing IP address(es) ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.911 [INFO][4240] utils.go 188: Calico CNI releasing IP address ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.963 [INFO][4253] ipam_plugin.go 417: Releasing address using handleID ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.963 [INFO][4253] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.963 [INFO][4253] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.981 [WARNING][4253] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.981 [INFO][4253] ipam_plugin.go 445: Releasing address using workloadID ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.986 [INFO][4253] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:56.994505 containerd[1464]: 2024-10-08 19:43:56.988 [INFO][4240] k8s.go 621: Teardown processing complete. ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:43:56.995288 containerd[1464]: time="2024-10-08T19:43:56.994986247Z" level=info msg="TearDown network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\" successfully" Oct 8 19:43:56.995288 containerd[1464]: time="2024-10-08T19:43:56.995023967Z" level=info msg="StopPodSandbox for \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\" returns successfully" Oct 8 19:43:56.999202 systemd[1]: run-netns-cni\x2db959da33\x2d3b5e\x2dffa5\x2d9f80\x2d7f901429f450.mount: Deactivated successfully. 
Oct 8 19:43:57.002091 containerd[1464]: time="2024-10-08T19:43:56.999470376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgxnr,Uid:8892696d-63f0-457d-bddb-381ce2acd1db,Namespace:kube-system,Attempt:1,}" Oct 8 19:43:57.000506 systemd[1]: run-netns-cni\x2da046d845\x2d2a1b\x2d59a0\x2d7123\x2d2b64043a58bd.mount: Deactivated successfully. Oct 8 19:43:57.061520 kubelet[2728]: I1008 19:43:57.060535 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gf7n9" podStartSLOduration=30.060504884 podStartE2EDuration="30.060504884s" podCreationTimestamp="2024-10-08 19:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:57.056866444 +0000 UTC m=+44.390338988" watchObservedRunningTime="2024-10-08 19:43:57.060504884 +0000 UTC m=+44.393977388" Oct 8 19:43:57.099440 kernel: bpftool[4286]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:43:57.290732 systemd-networkd[1373]: calia6eb66b04e1: Link UP Oct 8 19:43:57.293208 systemd-networkd[1373]: calia6eb66b04e1: Gained carrier Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.135 [INFO][4291] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0 coredns-7db6d8ff4d- kube-system 8892696d-63f0-457d-bddb-381ce2acd1db 716 0 2024-10-08 19:43:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-5-28a2d443fc coredns-7db6d8ff4d-lgxnr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6eb66b04e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.135 [INFO][4291] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.198 [INFO][4315] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" HandleID="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.216 [INFO][4315] ipam_plugin.go 270: Auto assigning IP ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" HandleID="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003768b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-5-28a2d443fc", "pod":"coredns-7db6d8ff4d-lgxnr", "timestamp":"2024-10-08 19:43:57.196068286 +0000 UTC"}, Hostname:"ci-3975-2-2-5-28a2d443fc", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.216 [INFO][4315] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.217 [INFO][4315] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.217 [INFO][4315] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-5-28a2d443fc' Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.221 [INFO][4315] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.235 [INFO][4315] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.250 [INFO][4315] ipam.go 489: Trying affinity for 192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.253 [INFO][4315] ipam.go 155: Attempting to load block cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.258 [INFO][4315] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.259 [INFO][4315] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.261 [INFO][4315] ipam.go 1685: Creating new handle: k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.267 [INFO][4315] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.273 [INFO][4315] ipam.go 1216: Successfully claimed IPs: [192.168.45.194/26] block=192.168.45.192/26 handle="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.273 [INFO][4315] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.194/26] handle="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.273 [INFO][4315] ipam_plugin.go 379: Released host-wide IPAM lock. 
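To make the IPAM numbers above concrete: this host holds an affinity for the block 192.168.45.192/26, and the two coredns pods were assigned 192.168.45.193 and 192.168.45.194 out of that block. A small Go sketch, using only values copied from the log entries above, that verifies the containment and shows the size of a /26 block; nothing here is Calico's own IPAM code.

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and pod IPs copied from the ipam.go entries above.
        block := netip.MustParsePrefix("192.168.45.192/26")
        pods := []netip.Addr{
            netip.MustParseAddr("192.168.45.193"), // coredns-7db6d8ff4d-gf7n9
            netip.MustParseAddr("192.168.45.194"), // coredns-7db6d8ff4d-lgxnr
        }

        fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits())) // 64
        for _, ip := range pods {
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip)) // true, true
        }
    }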
Oct 8 19:43:57.325767 containerd[1464]: 2024-10-08 19:43:57.273 [INFO][4315] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.45.194/26] IPv6=[] ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" HandleID="k8s-pod-network.fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:57.326591 containerd[1464]: 2024-10-08 19:43:57.276 [INFO][4291] k8s.go 386: Populated endpoint ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8892696d-63f0-457d-bddb-381ce2acd1db", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"", Pod:"coredns-7db6d8ff4d-lgxnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6eb66b04e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:57.326591 containerd[1464]: 2024-10-08 19:43:57.277 [INFO][4291] k8s.go 387: Calico CNI using IPs: [192.168.45.194/32] ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:57.326591 containerd[1464]: 2024-10-08 19:43:57.277 [INFO][4291] dataplane_linux.go 68: Setting the host side veth name to calia6eb66b04e1 ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:57.326591 containerd[1464]: 2024-10-08 19:43:57.294 [INFO][4291] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" 
WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:57.326591 containerd[1464]: 2024-10-08 19:43:57.295 [INFO][4291] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8892696d-63f0-457d-bddb-381ce2acd1db", ResourceVersion:"716", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e", Pod:"coredns-7db6d8ff4d-lgxnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6eb66b04e1", MAC:"da:2c:f7:62:c4:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:57.326591 containerd[1464]: 2024-10-08 19:43:57.322 [INFO][4291] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lgxnr" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:43:57.348935 systemd-networkd[1373]: caliacf27add7cf: Link UP Oct 8 19:43:57.352543 systemd-networkd[1373]: caliacf27add7cf: Gained carrier Oct 8 19:43:57.373699 containerd[1464]: time="2024-10-08T19:43:57.373583307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:57.373699 containerd[1464]: time="2024-10-08T19:43:57.373647548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:57.373699 containerd[1464]: time="2024-10-08T19:43:57.373662188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:57.373991 containerd[1464]: time="2024-10-08T19:43:57.373672028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.125 [INFO][4269] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0 calico-kube-controllers-54f6669d9f- calico-system 69f1a87a-d603-47b1-9d39-88a882952364 715 0 2024-10-08 19:43:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54f6669d9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975-2-2-5-28a2d443fc calico-kube-controllers-54f6669d9f-lvf2t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliacf27add7cf [] []}} ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.125 [INFO][4269] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.195 [INFO][4311] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" HandleID="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.216 [INFO][4311] ipam_plugin.go 270: Auto assigning IP ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" HandleID="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000102a60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-5-28a2d443fc", "pod":"calico-kube-controllers-54f6669d9f-lvf2t", "timestamp":"2024-10-08 19:43:57.195057075 +0000 UTC"}, Hostname:"ci-3975-2-2-5-28a2d443fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.217 [INFO][4311] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.273 [INFO][4311] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.273 [INFO][4311] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-5-28a2d443fc' Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.276 [INFO][4311] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.288 [INFO][4311] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.296 [INFO][4311] ipam.go 489: Trying affinity for 192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.300 [INFO][4311] ipam.go 155: Attempting to load block cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.305 [INFO][4311] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.306 [INFO][4311] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.311 [INFO][4311] ipam.go 1685: Creating new handle: k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444 Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.326 [INFO][4311] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.338 [INFO][4311] ipam.go 1216: Successfully claimed IPs: [192.168.45.195/26] block=192.168.45.192/26 handle="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.338 [INFO][4311] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.195/26] handle="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.338 [INFO][4311] ipam_plugin.go 379: Released host-wide IPAM lock. 
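The WorkloadEndpoint dumps for the two coredns pods earlier in the log list their ports in Go hex notation (Port:0x35, Port:0x23c1). A trivial sketch decoding them back to the familiar decimal values; the names and numbers are copied from those dumps.

    package main

    import "fmt"

    func main() {
        // Hex port values copied from the WorkloadEndpoint dumps above.
        ports := []struct {
            name string
            port uint16
        }{
            {"dns", 0x35},       // 53/UDP
            {"dns-tcp", 0x35},   // 53/TCP
            {"metrics", 0x23c1}, // 9153/TCP, the coredns Prometheus metrics port
        }
        for _, p := range ports {
            fmt.Printf("%-8s -> %d\n", p.name, p.port)
        }
    }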
Oct 8 19:43:57.393457 containerd[1464]: 2024-10-08 19:43:57.339 [INFO][4311] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.45.195/26] IPv6=[] ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" HandleID="k8s-pod-network.c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:57.394031 containerd[1464]: 2024-10-08 19:43:57.343 [INFO][4269] k8s.go 386: Populated endpoint ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0", GenerateName:"calico-kube-controllers-54f6669d9f-", Namespace:"calico-system", SelfLink:"", UID:"69f1a87a-d603-47b1-9d39-88a882952364", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f6669d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"", Pod:"calico-kube-controllers-54f6669d9f-lvf2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacf27add7cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:57.394031 containerd[1464]: 2024-10-08 19:43:57.343 [INFO][4269] k8s.go 387: Calico CNI using IPs: [192.168.45.195/32] ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:57.394031 containerd[1464]: 2024-10-08 19:43:57.343 [INFO][4269] dataplane_linux.go 68: Setting the host side veth name to caliacf27add7cf ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:57.394031 containerd[1464]: 2024-10-08 19:43:57.354 [INFO][4269] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:57.394031 containerd[1464]: 2024-10-08 19:43:57.356 
[INFO][4269] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0", GenerateName:"calico-kube-controllers-54f6669d9f-", Namespace:"calico-system", SelfLink:"", UID:"69f1a87a-d603-47b1-9d39-88a882952364", ResourceVersion:"715", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f6669d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444", Pod:"calico-kube-controllers-54f6669d9f-lvf2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacf27add7cf", MAC:"5e:7c:a9:ff:ec:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:57.394031 containerd[1464]: 2024-10-08 19:43:57.389 [INFO][4269] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444" Namespace="calico-system" Pod="calico-kube-controllers-54f6669d9f-lvf2t" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:43:57.416577 systemd[1]: Started cri-containerd-fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e.scope - libcontainer container fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e. Oct 8 19:43:57.444853 containerd[1464]: time="2024-10-08T19:43:57.444475082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:57.446577 containerd[1464]: time="2024-10-08T19:43:57.446499424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:57.446919 containerd[1464]: time="2024-10-08T19:43:57.446727627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:57.446919 containerd[1464]: time="2024-10-08T19:43:57.446816948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:57.496911 containerd[1464]: time="2024-10-08T19:43:57.496654933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lgxnr,Uid:8892696d-63f0-457d-bddb-381ce2acd1db,Namespace:kube-system,Attempt:1,} returns sandbox id \"fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e\"" Oct 8 19:43:57.498585 systemd[1]: Started cri-containerd-c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444.scope - libcontainer container c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444. Oct 8 19:43:57.542460 containerd[1464]: time="2024-10-08T19:43:57.541721106Z" level=info msg="CreateContainer within sandbox \"fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:43:57.565662 containerd[1464]: time="2024-10-08T19:43:57.565615967Z" level=info msg="CreateContainer within sandbox \"fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26da8e2f00b79ec798fda8b15d40f7886a8cf2ccd2e38e2b45133a1f97388964\"" Oct 8 19:43:57.567237 containerd[1464]: time="2024-10-08T19:43:57.566242214Z" level=info msg="StartContainer for \"26da8e2f00b79ec798fda8b15d40f7886a8cf2ccd2e38e2b45133a1f97388964\"" Oct 8 19:43:57.623820 systemd[1]: Started cri-containerd-26da8e2f00b79ec798fda8b15d40f7886a8cf2ccd2e38e2b45133a1f97388964.scope - libcontainer container 26da8e2f00b79ec798fda8b15d40f7886a8cf2ccd2e38e2b45133a1f97388964. Oct 8 19:43:57.649890 containerd[1464]: time="2024-10-08T19:43:57.649760927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54f6669d9f-lvf2t,Uid:69f1a87a-d603-47b1-9d39-88a882952364,Namespace:calico-system,Attempt:1,} returns sandbox id \"c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444\"" Oct 8 19:43:57.653975 containerd[1464]: time="2024-10-08T19:43:57.653669330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:43:57.686457 containerd[1464]: time="2024-10-08T19:43:57.686415848Z" level=info msg="StartContainer for \"26da8e2f00b79ec798fda8b15d40f7886a8cf2ccd2e38e2b45133a1f97388964\" returns successfully" Oct 8 19:43:57.826448 systemd-networkd[1373]: vxlan.calico: Link UP Oct 8 19:43:57.826616 systemd-networkd[1373]: vxlan.calico: Gained carrier Oct 8 19:43:58.065225 kubelet[2728]: I1008 19:43:58.064667 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lgxnr" podStartSLOduration=31.064639012 podStartE2EDuration="31.064639012s" podCreationTimestamp="2024-10-08 19:43:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:43:58.062498109 +0000 UTC m=+45.395970653" watchObservedRunningTime="2024-10-08 19:43:58.064639012 +0000 UTC m=+45.398111516" Oct 8 19:43:58.192431 systemd-networkd[1373]: cali893895766a1: Gained IPv6LL Oct 8 19:43:58.512228 systemd-networkd[1373]: calia6eb66b04e1: Gained IPv6LL Oct 8 19:43:58.798037 containerd[1464]: time="2024-10-08T19:43:58.797890505Z" level=info msg="StopPodSandbox for \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\"" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.862 [INFO][4571] k8s.go 608: Cleaning up netns ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:43:58.912239 
containerd[1464]: 2024-10-08 19:43:58.862 [INFO][4571] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" iface="eth0" netns="/var/run/netns/cni-b69043b2-6f52-e3b8-5e94-e0c417a648cb" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.863 [INFO][4571] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" iface="eth0" netns="/var/run/netns/cni-b69043b2-6f52-e3b8-5e94-e0c417a648cb" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.866 [INFO][4571] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" iface="eth0" netns="/var/run/netns/cni-b69043b2-6f52-e3b8-5e94-e0c417a648cb" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.866 [INFO][4571] k8s.go 615: Releasing IP address(es) ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.866 [INFO][4571] utils.go 188: Calico CNI releasing IP address ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.892 [INFO][4577] ipam_plugin.go 417: Releasing address using handleID ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.892 [INFO][4577] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.892 [INFO][4577] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.903 [WARNING][4577] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.903 [INFO][4577] ipam_plugin.go 445: Releasing address using workloadID ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.906 [INFO][4577] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:43:58.912239 containerd[1464]: 2024-10-08 19:43:58.909 [INFO][4571] k8s.go 621: Teardown processing complete. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:43:58.917436 containerd[1464]: time="2024-10-08T19:43:58.915353170Z" level=info msg="TearDown network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\" successfully" Oct 8 19:43:58.917436 containerd[1464]: time="2024-10-08T19:43:58.915650733Z" level=info msg="StopPodSandbox for \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\" returns successfully" Oct 8 19:43:58.917316 systemd[1]: run-netns-cni\x2db69043b2\x2d6f52\x2de3b8\x2d5e94\x2de0c417a648cb.mount: Deactivated successfully. 
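The teardown above releases the allocation by HandleID (the address being already gone is only a warning and is ignored), tears down the sandbox network, and systemd then deactivates the CNI netns mount unit run-netns-cni\x2db69043b2....mount. The \x2d sequences are systemd's unit-name escaping of "-", and "-" in a unit name stands for "/"; systemd-escape --unescape reverses the mapping, and a hedged Go equivalent (illustrative only; /var/run is a symlink to /run) looks like this:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd unit-name escaping for a path-derived unit:
// "\xNN" becomes the byte 0xNN and "-" becomes "/". Illustrative sketch only;
// the canonical tool is `systemd-escape --unescape`.
func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		switch {
		case s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x':
			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 3
				continue
			}
			b.WriteByte(s[i])
		case s[i] == '-':
			b.WriteByte('/')
		default:
			b.WriteByte(s[i])
		}
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2db69043b2\x2d6f52\x2de3b8\x2d5e94\x2de0c417a648cb.mount`
	path := "/" + unescapeUnit(strings.TrimSuffix(unit, ".mount"))
	fmt.Println(path) // /run/netns/cni-b69043b2-6f52-e3b8-5e94-e0c417a648cb
}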
Oct 8 19:43:58.919557 containerd[1464]: time="2024-10-08T19:43:58.918819767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsxhd,Uid:ad896521-c957-4f2e-a42a-e7387295bb9d,Namespace:calico-system,Attempt:1,}" Oct 8 19:43:58.961761 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Oct 8 19:43:59.023551 systemd-networkd[1373]: caliacf27add7cf: Gained IPv6LL Oct 8 19:43:59.152578 systemd-networkd[1373]: cali04708b20872: Link UP Oct 8 19:43:59.155442 systemd-networkd[1373]: cali04708b20872: Gained carrier Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.035 [INFO][4583] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0 csi-node-driver- calico-system ad896521-c957-4f2e-a42a-e7387295bb9d 747 0 2024-10-08 19:43:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975-2-2-5-28a2d443fc csi-node-driver-rsxhd eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali04708b20872 [] []}} ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.036 [INFO][4583] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.070 [INFO][4594] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" HandleID="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.088 [INFO][4594] ipam_plugin.go 270: Auto assigning IP ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" HandleID="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-5-28a2d443fc", "pod":"csi-node-driver-rsxhd", "timestamp":"2024-10-08 19:43:59.070048624 +0000 UTC"}, Hostname:"ci-3975-2-2-5-28a2d443fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.088 [INFO][4594] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.088 [INFO][4594] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
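Note how each assignment run brackets its work with "About to acquire host-wide IPAM lock", "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock": CNI ADDs racing on the same node are serialized so two pods cannot be handed the same address out of a block. A minimal sketch of that serialization, assuming a toy in-process allocator rather than Calico's datastore-backed locking:

package main

import (
	"fmt"
	"sync"
)

// allocator hands out sequential addresses from one block while a host-wide
// mutex is held, so concurrent requests never receive duplicates. This is an
// assumption-laden stand-in for the host-wide IPAM lock seen in the log.
type allocator struct {
	mu   sync.Mutex // the "host-wide IPAM lock", in miniature
	next int
}

func (a *allocator) assign() string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.45.%d/26", 192+a.next)
	a.next++
	return ip
}

func main() {
	a := &allocator{next: 3} // pretend .192-.194 are taken, as in the log
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ { // two pods requesting addresses concurrently
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println("claimed", a.assign())
		}()
	}
	wg.Wait()
}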
Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.088 [INFO][4594] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-5-28a2d443fc' Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.091 [INFO][4594] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.097 [INFO][4594] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.106 [INFO][4594] ipam.go 489: Trying affinity for 192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.108 [INFO][4594] ipam.go 155: Attempting to load block cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.114 [INFO][4594] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.114 [INFO][4594] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.117 [INFO][4594] ipam.go 1685: Creating new handle: k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.126 [INFO][4594] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.144 [INFO][4594] ipam.go 1216: Successfully claimed IPs: [192.168.45.196/26] block=192.168.45.192/26 handle="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.145 [INFO][4594] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.196/26] handle="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.146 [INFO][4594] ipam_plugin.go 379: Released host-wide IPAM lock. 
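Each bracketed Calico line above has the same shape: a timestamp, a level and a numeric identifier in brackets, the source file and line, a message, and trailing key=value or key="value" pairs (cidr, host, handle, block). When chasing one allocation through a node's journal it is usually enough to pull those pairs out with a small regular expression; the field names below come from the log itself, the rest is an illustrative assumption.

package main

import (
	"fmt"
	"regexp"
)

// kvRe matches trailing key=value / key="value" pairs in Calico CNI log lines,
// e.g. cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc".
var kvRe = regexp.MustCompile(`(\w+)=("([^"]*)"|\S+)`)

func fields(line string) map[string]string {
	out := map[string]string{}
	for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
		if m[3] != "" {
			out[m[1]] = m[3] // quoted value
		} else {
			out[m[1]] = m[2] // bare value
		}
	}
	return out
}

func main() {
	line := `2024-10-08 19:43:59.114 [INFO][4594] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc"`
	f := fields(line)
	fmt.Println(f["cidr"], f["host"]) // 192.168.45.192/26 ci-3975-2-2-5-28a2d443fc
}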
Oct 8 19:43:59.183462 containerd[1464]: 2024-10-08 19:43:59.146 [INFO][4594] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.45.196/26] IPv6=[] ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" HandleID="k8s-pod-network.e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:59.184082 containerd[1464]: 2024-10-08 19:43:59.148 [INFO][4583] k8s.go 386: Populated endpoint ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad896521-c957-4f2e-a42a-e7387295bb9d", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"", Pod:"csi-node-driver-rsxhd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali04708b20872", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:59.184082 containerd[1464]: 2024-10-08 19:43:59.150 [INFO][4583] k8s.go 387: Calico CNI using IPs: [192.168.45.196/32] ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:59.184082 containerd[1464]: 2024-10-08 19:43:59.150 [INFO][4583] dataplane_linux.go 68: Setting the host side veth name to cali04708b20872 ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:59.184082 containerd[1464]: 2024-10-08 19:43:59.155 [INFO][4583] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:59.184082 containerd[1464]: 2024-10-08 19:43:59.157 [INFO][4583] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" 
WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad896521-c957-4f2e-a42a-e7387295bb9d", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e", Pod:"csi-node-driver-rsxhd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali04708b20872", MAC:"d2:5a:2b:53:88:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:43:59.184082 containerd[1464]: 2024-10-08 19:43:59.174 [INFO][4583] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e" Namespace="calico-system" Pod="csi-node-driver-rsxhd" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:43:59.207805 containerd[1464]: time="2024-10-08T19:43:59.207143437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:43:59.207805 containerd[1464]: time="2024-10-08T19:43:59.207404160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:59.208592 containerd[1464]: time="2024-10-08T19:43:59.208542212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:43:59.208681 containerd[1464]: time="2024-10-08T19:43:59.208573172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:43:59.235596 systemd[1]: Started cri-containerd-e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e.scope - libcontainer container e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e. 
Oct 8 19:43:59.264593 containerd[1464]: time="2024-10-08T19:43:59.264319363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsxhd,Uid:ad896521-c957-4f2e-a42a-e7387295bb9d,Namespace:calico-system,Attempt:1,} returns sandbox id \"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e\"" Oct 8 19:44:00.239794 systemd-networkd[1373]: cali04708b20872: Gained IPv6LL Oct 8 19:44:01.666090 containerd[1464]: time="2024-10-08T19:44:01.665277207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:01.666090 containerd[1464]: time="2024-10-08T19:44:01.666045095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:44:01.667617 containerd[1464]: time="2024-10-08T19:44:01.667548990Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:01.671407 containerd[1464]: time="2024-10-08T19:44:01.671078986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:01.672221 containerd[1464]: time="2024-10-08T19:44:01.671923515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 4.018215025s" Oct 8 19:44:01.672221 containerd[1464]: time="2024-10-08T19:44:01.671965316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:44:01.674577 containerd[1464]: time="2024-10-08T19:44:01.674503582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:44:01.695602 containerd[1464]: time="2024-10-08T19:44:01.695538118Z" level=info msg="CreateContainer within sandbox \"c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:44:01.717766 containerd[1464]: time="2024-10-08T19:44:01.717678666Z" level=info msg="CreateContainer within sandbox \"c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454\"" Oct 8 19:44:01.720414 containerd[1464]: time="2024-10-08T19:44:01.718545795Z" level=info msg="StartContainer for \"ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454\"" Oct 8 19:44:01.763619 systemd[1]: Started cri-containerd-ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454.scope - libcontainer container ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454. 
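The pull above reads 31361753 bytes for the 32729240-byte kube-controllers image in 4.018215025s, roughly 7.8 MB/s from ghcr.io. A quick check of that rate, plain arithmetic on the logged figures and nothing containerd-specific:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures taken directly from the pull log above.
	const bytesRead = 31361753.0
	elapsed, err := time.ParseDuration("4.018215025s")
	if err != nil {
		panic(err)
	}
	rate := bytesRead / elapsed.Seconds()
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20)) // about 7.8 MB/s (7.4 MiB/s)
}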
Oct 8 19:44:01.812622 containerd[1464]: time="2024-10-08T19:44:01.812573442Z" level=info msg="StartContainer for \"ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454\" returns successfully" Oct 8 19:44:02.088404 kubelet[2728]: I1008 19:44:02.087270 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54f6669d9f-lvf2t" podStartSLOduration=23.066354882 podStartE2EDuration="27.087251335s" podCreationTimestamp="2024-10-08 19:43:35 +0000 UTC" firstStartedPulling="2024-10-08 19:43:57.652887041 +0000 UTC m=+44.986359585" lastFinishedPulling="2024-10-08 19:44:01.673783454 +0000 UTC m=+49.007256038" observedRunningTime="2024-10-08 19:44:02.086329045 +0000 UTC m=+49.419801589" watchObservedRunningTime="2024-10-08 19:44:02.087251335 +0000 UTC m=+49.420723879" Oct 8 19:44:02.682819 systemd[1]: run-containerd-runc-k8s.io-ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454-runc.fAVQkY.mount: Deactivated successfully. Oct 8 19:44:03.261382 containerd[1464]: time="2024-10-08T19:44:03.261274639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:03.263303 containerd[1464]: time="2024-10-08T19:44:03.263252739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:44:03.264854 containerd[1464]: time="2024-10-08T19:44:03.264692553Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:03.270221 containerd[1464]: time="2024-10-08T19:44:03.269614163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:03.271037 containerd[1464]: time="2024-10-08T19:44:03.270976456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.596420554s" Oct 8 19:44:03.271037 containerd[1464]: time="2024-10-08T19:44:03.271031577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:44:03.275103 containerd[1464]: time="2024-10-08T19:44:03.274772494Z" level=info msg="CreateContainer within sandbox \"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:44:03.297016 containerd[1464]: time="2024-10-08T19:44:03.296945836Z" level=info msg="CreateContainer within sandbox \"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cecd9b3635938ff376f78eda7a6ab1d2ec588126becc13ff673ab8d4bdbd3d4c\"" Oct 8 19:44:03.299304 containerd[1464]: time="2024-10-08T19:44:03.298544652Z" level=info msg="StartContainer for \"cecd9b3635938ff376f78eda7a6ab1d2ec588126becc13ff673ab8d4bdbd3d4c\"" Oct 8 19:44:03.347625 systemd[1]: Started cri-containerd-cecd9b3635938ff376f78eda7a6ab1d2ec588126becc13ff673ab8d4bdbd3d4c.scope - libcontainer container 
cecd9b3635938ff376f78eda7a6ab1d2ec588126becc13ff673ab8d4bdbd3d4c. Oct 8 19:44:03.387241 containerd[1464]: time="2024-10-08T19:44:03.387178377Z" level=info msg="StartContainer for \"cecd9b3635938ff376f78eda7a6ab1d2ec588126becc13ff673ab8d4bdbd3d4c\" returns successfully" Oct 8 19:44:03.390601 containerd[1464]: time="2024-10-08T19:44:03.389107197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:44:04.835969 containerd[1464]: time="2024-10-08T19:44:04.835869338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:04.837584 containerd[1464]: time="2024-10-08T19:44:04.837248231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:44:04.838916 containerd[1464]: time="2024-10-08T19:44:04.838549764Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:04.842111 containerd[1464]: time="2024-10-08T19:44:04.842035838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:04.843151 containerd[1464]: time="2024-10-08T19:44:04.843081689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.453935691s" Oct 8 19:44:04.843456 containerd[1464]: time="2024-10-08T19:44:04.843299731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:44:04.847855 containerd[1464]: time="2024-10-08T19:44:04.847807935Z" level=info msg="CreateContainer within sandbox \"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:44:04.872982 containerd[1464]: time="2024-10-08T19:44:04.872924583Z" level=info msg="CreateContainer within sandbox \"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4f9b96cb57da468887c9b10806270969e2fbb058f5c1c92872564cb0cfcf7f35\"" Oct 8 19:44:04.875673 containerd[1464]: time="2024-10-08T19:44:04.874613039Z" level=info msg="StartContainer for \"4f9b96cb57da468887c9b10806270969e2fbb058f5c1c92872564cb0cfcf7f35\"" Oct 8 19:44:04.915601 systemd[1]: Started cri-containerd-4f9b96cb57da468887c9b10806270969e2fbb058f5c1c92872564cb0cfcf7f35.scope - libcontainer container 4f9b96cb57da468887c9b10806270969e2fbb058f5c1c92872564cb0cfcf7f35. 
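The pod_startup_latency_tracker entries in this journal (coredns and calico-kube-controllers above, csi-node-driver-rsxhd further down) report two figures: podStartE2EDuration is observed running time minus pod creation, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the calico-kube-controllers numbers from the logged timestamps lands within a millisecond of kubelet's own values, the residue coming from the creation timestamp being logged at whole-second precision:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps taken from the calico-kube-controllers startup entry above
	// (monotonic "m=+..." suffixes dropped).
	created := mustParse("2024-10-08 19:43:35 +0000 UTC")
	firstPull := mustParse("2024-10-08 19:43:57.652887041 +0000 UTC")
	lastPull := mustParse("2024-10-08 19:44:01.673783454 +0000 UTC")
	running := mustParse("2024-10-08 19:44:02.086329045 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)      // E2E minus time spent pulling images
	fmt.Println("podStartE2EDuration:", e2e)  // 27.086329045s vs logged 27.087251335s
	fmt.Println("podStartSLOduration:", slo)  // 23.065432632s vs logged 23.066354882s
}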
Oct 8 19:44:04.950289 containerd[1464]: time="2024-10-08T19:44:04.950188984Z" level=info msg="StartContainer for \"4f9b96cb57da468887c9b10806270969e2fbb058f5c1c92872564cb0cfcf7f35\" returns successfully" Oct 8 19:44:05.928295 kubelet[2728]: I1008 19:44:05.927914 2728 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:44:05.928295 kubelet[2728]: I1008 19:44:05.927950 2728 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:44:12.720384 kubelet[2728]: I1008 19:44:12.718390 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rsxhd" podStartSLOduration=32.139974694 podStartE2EDuration="37.718353335s" podCreationTimestamp="2024-10-08 19:43:35 +0000 UTC" firstStartedPulling="2024-10-08 19:43:59.266347704 +0000 UTC m=+46.599820208" lastFinishedPulling="2024-10-08 19:44:04.844726305 +0000 UTC m=+52.178198849" observedRunningTime="2024-10-08 19:44:05.09617533 +0000 UTC m=+52.429647874" watchObservedRunningTime="2024-10-08 19:44:12.718353335 +0000 UTC m=+60.051825879" Oct 8 19:44:12.773424 containerd[1464]: time="2024-10-08T19:44:12.773165221Z" level=info msg="StopPodSandbox for \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\"" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.820 [WARNING][4861] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad896521-c957-4f2e-a42a-e7387295bb9d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e", Pod:"csi-node-driver-rsxhd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali04708b20872", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.820 [INFO][4861] k8s.go 608: Cleaning up netns ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.820 [INFO][4861] dataplane_linux.go 526: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" iface="eth0" netns="" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.820 [INFO][4861] k8s.go 615: Releasing IP address(es) ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.820 [INFO][4861] utils.go 188: Calico CNI releasing IP address ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.847 [INFO][4869] ipam_plugin.go 417: Releasing address using handleID ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.847 [INFO][4869] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.847 [INFO][4869] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.858 [WARNING][4869] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.858 [INFO][4869] ipam_plugin.go 445: Releasing address using workloadID ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.860 [INFO][4869] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:12.866816 containerd[1464]: 2024-10-08 19:44:12.863 [INFO][4861] k8s.go 621: Teardown processing complete. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.869048 containerd[1464]: time="2024-10-08T19:44:12.866854893Z" level=info msg="TearDown network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\" successfully" Oct 8 19:44:12.869048 containerd[1464]: time="2024-10-08T19:44:12.866886174Z" level=info msg="StopPodSandbox for \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\" returns successfully" Oct 8 19:44:12.869048 containerd[1464]: time="2024-10-08T19:44:12.868058144Z" level=info msg="RemovePodSandbox for \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\"" Oct 8 19:44:12.874023 containerd[1464]: time="2024-10-08T19:44:12.868096904Z" level=info msg="Forcibly stopping sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\"" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.927 [WARNING][4889] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad896521-c957-4f2e-a42a-e7387295bb9d", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"e6a675fe3ddb829d0923bc89ce7c9b52b129c6157d2849a2cafd3f98f3f7740e", Pod:"csi-node-driver-rsxhd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.45.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali04708b20872", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.928 [INFO][4889] k8s.go 608: Cleaning up netns ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.928 [INFO][4889] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" iface="eth0" netns="" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.928 [INFO][4889] k8s.go 615: Releasing IP address(es) ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.928 [INFO][4889] utils.go 188: Calico CNI releasing IP address ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.953 [INFO][4895] ipam_plugin.go 417: Releasing address using handleID ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.953 [INFO][4895] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.953 [INFO][4895] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.971 [WARNING][4895] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.972 [INFO][4895] ipam_plugin.go 445: Releasing address using workloadID ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" HandleID="k8s-pod-network.db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Workload="ci--3975--2--2--5--28a2d443fc-k8s-csi--node--driver--rsxhd-eth0" Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.978 [INFO][4895] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:12.982345 containerd[1464]: 2024-10-08 19:44:12.980 [INFO][4889] k8s.go 621: Teardown processing complete. ContainerID="db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5" Oct 8 19:44:12.982345 containerd[1464]: time="2024-10-08T19:44:12.982315398Z" level=info msg="TearDown network for sandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\" successfully" Oct 8 19:44:12.987211 containerd[1464]: time="2024-10-08T19:44:12.987139961Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:44:12.987351 containerd[1464]: time="2024-10-08T19:44:12.987240002Z" level=info msg="RemovePodSandbox \"db1c5e80edbf5097e4716caccf32fd6830a0f376009a2af2007b5b1f6270dbd5\" returns successfully" Oct 8 19:44:12.987970 containerd[1464]: time="2024-10-08T19:44:12.987927248Z" level=info msg="StopPodSandbox for \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\"" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.031 [WARNING][4913] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0", GenerateName:"calico-kube-controllers-54f6669d9f-", Namespace:"calico-system", SelfLink:"", UID:"69f1a87a-d603-47b1-9d39-88a882952364", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f6669d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444", Pod:"calico-kube-controllers-54f6669d9f-lvf2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacf27add7cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.031 [INFO][4913] k8s.go 608: Cleaning up netns ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.031 [INFO][4913] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" iface="eth0" netns="" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.031 [INFO][4913] k8s.go 615: Releasing IP address(es) ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.031 [INFO][4913] utils.go 188: Calico CNI releasing IP address ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.067 [INFO][4919] ipam_plugin.go 417: Releasing address using handleID ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.067 [INFO][4919] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.068 [INFO][4919] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.081 [WARNING][4919] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.081 [INFO][4919] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.084 [INFO][4919] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:13.089780 containerd[1464]: 2024-10-08 19:44:13.087 [INFO][4913] k8s.go 621: Teardown processing complete. ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.090281 containerd[1464]: time="2024-10-08T19:44:13.090240107Z" level=info msg="TearDown network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\" successfully" Oct 8 19:44:13.090318 containerd[1464]: time="2024-10-08T19:44:13.090279788Z" level=info msg="StopPodSandbox for \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\" returns successfully" Oct 8 19:44:13.091098 containerd[1464]: time="2024-10-08T19:44:13.090729552Z" level=info msg="RemovePodSandbox for \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\"" Oct 8 19:44:13.091098 containerd[1464]: time="2024-10-08T19:44:13.090760432Z" level=info msg="Forcibly stopping sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\"" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.143 [WARNING][4940] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0", GenerateName:"calico-kube-controllers-54f6669d9f-", Namespace:"calico-system", SelfLink:"", UID:"69f1a87a-d603-47b1-9d39-88a882952364", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54f6669d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"c16ae221cddee549d8b86c7e9b5b502dd2a1cda4599c2155dcac19835a81a444", Pod:"calico-kube-controllers-54f6669d9f-lvf2t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.45.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacf27add7cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.143 [INFO][4940] k8s.go 608: Cleaning up netns ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.143 [INFO][4940] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" iface="eth0" netns="" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.143 [INFO][4940] k8s.go 615: Releasing IP address(es) ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.143 [INFO][4940] utils.go 188: Calico CNI releasing IP address ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.170 [INFO][4946] ipam_plugin.go 417: Releasing address using handleID ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.170 [INFO][4946] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.171 [INFO][4946] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.181 [WARNING][4946] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.181 [INFO][4946] ipam_plugin.go 445: Releasing address using workloadID ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" HandleID="k8s-pod-network.bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--kube--controllers--54f6669d9f--lvf2t-eth0" Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.184 [INFO][4946] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:13.186956 containerd[1464]: 2024-10-08 19:44:13.185 [INFO][4940] k8s.go 621: Teardown processing complete. ContainerID="bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353" Oct 8 19:44:13.187549 containerd[1464]: time="2024-10-08T19:44:13.187003236Z" level=info msg="TearDown network for sandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\" successfully" Oct 8 19:44:13.196432 containerd[1464]: time="2024-10-08T19:44:13.196376959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:44:13.196580 containerd[1464]: time="2024-10-08T19:44:13.196467239Z" level=info msg="RemovePodSandbox \"bbd5cbf0b21ed75f77a3956467d7b063d98644c10a2de5ce6e9bbbfc85690353\" returns successfully" Oct 8 19:44:13.197516 containerd[1464]: time="2024-10-08T19:44:13.197090805Z" level=info msg="StopPodSandbox for \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\"" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.232 [WARNING][4965] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8892696d-63f0-457d-bddb-381ce2acd1db", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e", Pod:"coredns-7db6d8ff4d-lgxnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6eb66b04e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.233 [INFO][4965] k8s.go 608: Cleaning up netns ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.233 [INFO][4965] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" iface="eth0" netns="" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.233 [INFO][4965] k8s.go 615: Releasing IP address(es) ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.233 [INFO][4965] utils.go 188: Calico CNI releasing IP address ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.256 [INFO][4971] ipam_plugin.go 417: Releasing address using handleID ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.256 [INFO][4971] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.256 [INFO][4971] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.266 [WARNING][4971] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.266 [INFO][4971] ipam_plugin.go 445: Releasing address using workloadID ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.268 [INFO][4971] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:13.271436 containerd[1464]: 2024-10-08 19:44:13.269 [INFO][4965] k8s.go 621: Teardown processing complete. ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.272960 containerd[1464]: time="2024-10-08T19:44:13.272522387Z" level=info msg="TearDown network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\" successfully" Oct 8 19:44:13.272960 containerd[1464]: time="2024-10-08T19:44:13.272606867Z" level=info msg="StopPodSandbox for \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\" returns successfully" Oct 8 19:44:13.273927 containerd[1464]: time="2024-10-08T19:44:13.273522355Z" level=info msg="RemovePodSandbox for \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\"" Oct 8 19:44:13.273927 containerd[1464]: time="2024-10-08T19:44:13.273601516Z" level=info msg="Forcibly stopping sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\"" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.317 [WARNING][4990] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"8892696d-63f0-457d-bddb-381ce2acd1db", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"fb429b76e898941cb27c1243b0f3cfe836d39ab8d9239ae38f84cae4b96b963e", Pod:"coredns-7db6d8ff4d-lgxnr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6eb66b04e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.317 [INFO][4990] k8s.go 608: Cleaning up netns ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.317 [INFO][4990] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" iface="eth0" netns="" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.317 [INFO][4990] k8s.go 615: Releasing IP address(es) ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.317 [INFO][4990] utils.go 188: Calico CNI releasing IP address ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.338 [INFO][4996] ipam_plugin.go 417: Releasing address using handleID ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.338 [INFO][4996] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.338 [INFO][4996] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.350 [WARNING][4996] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.350 [INFO][4996] ipam_plugin.go 445: Releasing address using workloadID ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" HandleID="k8s-pod-network.af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--lgxnr-eth0" Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.353 [INFO][4996] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:13.355894 containerd[1464]: 2024-10-08 19:44:13.354 [INFO][4990] k8s.go 621: Teardown processing complete. ContainerID="af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697" Oct 8 19:44:13.357344 containerd[1464]: time="2024-10-08T19:44:13.356031879Z" level=info msg="TearDown network for sandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\" successfully" Oct 8 19:44:13.360278 containerd[1464]: time="2024-10-08T19:44:13.360233396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:44:13.360563 containerd[1464]: time="2024-10-08T19:44:13.360543559Z" level=info msg="RemovePodSandbox \"af5dd4aa69ad83dd96661c9359ebe303eb22a0c02f700e5c1441707114768697\" returns successfully" Oct 8 19:44:13.361431 containerd[1464]: time="2024-10-08T19:44:13.361146124Z" level=info msg="StopPodSandbox for \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\"" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.401 [WARNING][5015] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76af55ff-f2b4-42c0-ae6e-0c08786cce40", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2", Pod:"coredns-7db6d8ff4d-gf7n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali893895766a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.401 [INFO][5015] k8s.go 608: Cleaning up netns ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.401 [INFO][5015] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" iface="eth0" netns="" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.401 [INFO][5015] k8s.go 615: Releasing IP address(es) ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.401 [INFO][5015] utils.go 188: Calico CNI releasing IP address ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.428 [INFO][5021] ipam_plugin.go 417: Releasing address using handleID ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.429 [INFO][5021] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.429 [INFO][5021] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.440 [WARNING][5021] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.440 [INFO][5021] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.442 [INFO][5021] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:13.446308 containerd[1464]: 2024-10-08 19:44:13.444 [INFO][5015] k8s.go 621: Teardown processing complete. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.446308 containerd[1464]: time="2024-10-08T19:44:13.446070949Z" level=info msg="TearDown network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\" successfully" Oct 8 19:44:13.446308 containerd[1464]: time="2024-10-08T19:44:13.446100310Z" level=info msg="StopPodSandbox for \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\" returns successfully" Oct 8 19:44:13.447033 containerd[1464]: time="2024-10-08T19:44:13.446856596Z" level=info msg="RemovePodSandbox for \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\"" Oct 8 19:44:13.447033 containerd[1464]: time="2024-10-08T19:44:13.446889997Z" level=info msg="Forcibly stopping sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\"" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.489 [WARNING][5038] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"76af55ff-f2b4-42c0-ae6e-0c08786cce40", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 43, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"2ac15e8fa3def6f1a541ef3cb4fc484e3d1ddd19b95eb904f644b184443977e2", Pod:"coredns-7db6d8ff4d-gf7n9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.45.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali893895766a1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.489 [INFO][5038] k8s.go 608: Cleaning up netns ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.489 [INFO][5038] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" iface="eth0" netns="" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.489 [INFO][5038] k8s.go 615: Releasing IP address(es) ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.489 [INFO][5038] utils.go 188: Calico CNI releasing IP address ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.511 [INFO][5045] ipam_plugin.go 417: Releasing address using handleID ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.511 [INFO][5045] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.511 [INFO][5045] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.522 [WARNING][5045] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.522 [INFO][5045] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" HandleID="k8s-pod-network.ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Workload="ci--3975--2--2--5--28a2d443fc-k8s-coredns--7db6d8ff4d--gf7n9-eth0" Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.525 [INFO][5045] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:44:13.530043 containerd[1464]: 2024-10-08 19:44:13.527 [INFO][5038] k8s.go 621: Teardown processing complete. ContainerID="ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e" Oct 8 19:44:13.530660 containerd[1464]: time="2024-10-08T19:44:13.530259528Z" level=info msg="TearDown network for sandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\" successfully" Oct 8 19:44:13.536521 containerd[1464]: time="2024-10-08T19:44:13.536458982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 8 19:44:13.536668 containerd[1464]: time="2024-10-08T19:44:13.536564543Z" level=info msg="RemovePodSandbox \"ca0f6b477e61fb3dc388a81d71ba6b4af1612be98a8e466abdfc0ddefbb14b4e\" returns successfully" Oct 8 19:44:13.792869 systemd[1]: run-containerd-runc-k8s.io-ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454-runc.WqBNag.mount: Deactivated successfully. Oct 8 19:44:19.235386 kubelet[2728]: I1008 19:44:19.234102 2728 topology_manager.go:215] "Topology Admit Handler" podUID="73469108-4914-415f-8805-ecfcd8958b79" podNamespace="calico-apiserver" podName="calico-apiserver-c644885fc-hxv6x" Oct 8 19:44:19.245881 systemd[1]: Created slice kubepods-besteffort-pod73469108_4914_415f_8805_ecfcd8958b79.slice - libcontainer container kubepods-besteffort-pod73469108_4914_415f_8805_ecfcd8958b79.slice. Oct 8 19:44:19.278464 kubelet[2728]: I1008 19:44:19.277935 2728 topology_manager.go:215] "Topology Admit Handler" podUID="47b1c46e-b116-43e8-97ac-a60a6ad4a0b4" podNamespace="calico-apiserver" podName="calico-apiserver-c644885fc-2bv4j" Oct 8 19:44:19.289849 systemd[1]: Created slice kubepods-besteffort-pod47b1c46e_b116_43e8_97ac_a60a6ad4a0b4.slice - libcontainer container kubepods-besteffort-pod47b1c46e_b116_43e8_97ac_a60a6ad4a0b4.slice. 
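The teardown entries above follow a deliberately tolerant pattern: the CNI DEL runs against a sandbox whose WorkloadEndpoint already records a different ContainerID, so Calico keeps the WEP ("don't delete WEP"), and the IPAM handle has already been released, so the "Asked to release address but it doesn't exist" warning is ignored and RemovePodSandbox still returns successfully. A minimal Go sketch of that idempotent release, using hypothetical names (ipamStore, releaseByHandle) rather than Calico's real API:

    package main

    import (
    	"errors"
    	"fmt"
    )

    var errHandleNotFound = errors.New("handle not found")

    // ipamStore is an illustrative stand-in for the shared IPAM datastore.
    type ipamStore struct{ handles map[string][]string }

    // releaseByHandle frees every address recorded under handleID.
    func (s *ipamStore) releaseByHandle(handleID string) error {
    	if _, ok := s.handles[handleID]; !ok {
    		return errHandleNotFound
    	}
    	delete(s.handles, handleID)
    	return nil
    }

    // teardown mirrors the DEL behaviour in the log: a missing handle is ignored,
    // so a repeated StopPodSandbox/RemovePodSandbox still "returns successfully".
    func teardown(s *ipamStore, handleID string) error {
    	if err := s.releaseByHandle(handleID); err != nil {
    		if errors.Is(err, errHandleNotFound) {
    			fmt.Printf("WARNING: asked to release %s but it doesn't exist; ignoring\n", handleID)
    			return nil
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	s := &ipamStore{handles: map[string][]string{}}
    	// A second DEL for an already-released sandbox is a no-op, not an error.
    	fmt.Println(teardown(s, "k8s-pod-network.example"))
    }

Treating a missing handle as success is what lets the periodic RemovePodSandbox retries seen throughout this section converge instead of failing forever.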
Oct 8 19:44:19.306626 kubelet[2728]: I1008 19:44:19.306533 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/73469108-4914-415f-8805-ecfcd8958b79-calico-apiserver-certs\") pod \"calico-apiserver-c644885fc-hxv6x\" (UID: \"73469108-4914-415f-8805-ecfcd8958b79\") " pod="calico-apiserver/calico-apiserver-c644885fc-hxv6x" Oct 8 19:44:19.306626 kubelet[2728]: I1008 19:44:19.306581 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqrhn\" (UniqueName: \"kubernetes.io/projected/73469108-4914-415f-8805-ecfcd8958b79-kube-api-access-cqrhn\") pod \"calico-apiserver-c644885fc-hxv6x\" (UID: \"73469108-4914-415f-8805-ecfcd8958b79\") " pod="calico-apiserver/calico-apiserver-c644885fc-hxv6x" Oct 8 19:44:19.408049 kubelet[2728]: I1008 19:44:19.407347 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wz4g\" (UniqueName: \"kubernetes.io/projected/47b1c46e-b116-43e8-97ac-a60a6ad4a0b4-kube-api-access-4wz4g\") pod \"calico-apiserver-c644885fc-2bv4j\" (UID: \"47b1c46e-b116-43e8-97ac-a60a6ad4a0b4\") " pod="calico-apiserver/calico-apiserver-c644885fc-2bv4j" Oct 8 19:44:19.408049 kubelet[2728]: I1008 19:44:19.407424 2728 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/47b1c46e-b116-43e8-97ac-a60a6ad4a0b4-calico-apiserver-certs\") pod \"calico-apiserver-c644885fc-2bv4j\" (UID: \"47b1c46e-b116-43e8-97ac-a60a6ad4a0b4\") " pod="calico-apiserver/calico-apiserver-c644885fc-2bv4j" Oct 8 19:44:19.408049 kubelet[2728]: E1008 19:44:19.407663 2728 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:44:19.408049 kubelet[2728]: E1008 19:44:19.407779 2728 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/73469108-4914-415f-8805-ecfcd8958b79-calico-apiserver-certs podName:73469108-4914-415f-8805-ecfcd8958b79 nodeName:}" failed. No retries permitted until 2024-10-08 19:44:19.907741759 +0000 UTC m=+67.241214303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/73469108-4914-415f-8805-ecfcd8958b79-calico-apiserver-certs") pod "calico-apiserver-c644885fc-hxv6x" (UID: "73469108-4914-415f-8805-ecfcd8958b79") : secret "calico-apiserver-certs" not found Oct 8 19:44:19.595217 containerd[1464]: time="2024-10-08T19:44:19.593603564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c644885fc-2bv4j,Uid:47b1c46e-b116-43e8-97ac-a60a6ad4a0b4,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:44:19.778760 systemd-networkd[1373]: cali256a79c485c: Link UP Oct 8 19:44:19.778962 systemd-networkd[1373]: cali256a79c485c: Gained carrier Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.660 [INFO][5095] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0 calico-apiserver-c644885fc- calico-apiserver 47b1c46e-b116-43e8-97ac-a60a6ad4a0b4 862 0 2024-10-08 19:44:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c644885fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-5-28a2d443fc calico-apiserver-c644885fc-2bv4j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali256a79c485c [] []}} ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.660 [INFO][5095] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.705 [INFO][5103] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" HandleID="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.721 [INFO][5103] ipam_plugin.go 270: Auto assigning IP ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" HandleID="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000346ba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-5-28a2d443fc", "pod":"calico-apiserver-c644885fc-2bv4j", "timestamp":"2024-10-08 19:44:19.705566483 +0000 UTC"}, Hostname:"ci-3975-2-2-5-28a2d443fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.721 [INFO][5103] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.721 [INFO][5103] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.721 [INFO][5103] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-5-28a2d443fc' Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.725 [INFO][5103] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.736 [INFO][5103] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.745 [INFO][5103] ipam.go 489: Trying affinity for 192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.747 [INFO][5103] ipam.go 155: Attempting to load block cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.751 [INFO][5103] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.751 [INFO][5103] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.755 [INFO][5103] ipam.go 1685: Creating new handle: k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8 Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.760 [INFO][5103] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.772 [INFO][5103] ipam.go 1216: Successfully claimed IPs: [192.168.45.197/26] block=192.168.45.192/26 handle="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.772 [INFO][5103] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.197/26] handle="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.772 [INFO][5103] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:44:19.806533 containerd[1464]: 2024-10-08 19:44:19.772 [INFO][5103] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.45.197/26] IPv6=[] ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" HandleID="k8s-pod-network.775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" Oct 8 19:44:19.809858 containerd[1464]: 2024-10-08 19:44:19.775 [INFO][5095] k8s.go 386: Populated endpoint ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0", GenerateName:"calico-apiserver-c644885fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"47b1c46e-b116-43e8-97ac-a60a6ad4a0b4", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c644885fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"", Pod:"calico-apiserver-c644885fc-2bv4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali256a79c485c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:19.809858 containerd[1464]: 2024-10-08 19:44:19.775 [INFO][5095] k8s.go 387: Calico CNI using IPs: [192.168.45.197/32] ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" Oct 8 19:44:19.809858 containerd[1464]: 2024-10-08 19:44:19.776 [INFO][5095] dataplane_linux.go 68: Setting the host side veth name to cali256a79c485c ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" Oct 8 19:44:19.809858 containerd[1464]: 2024-10-08 19:44:19.779 [INFO][5095] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" Oct 8 19:44:19.809858 containerd[1464]: 2024-10-08 19:44:19.784 [INFO][5095] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0", GenerateName:"calico-apiserver-c644885fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"47b1c46e-b116-43e8-97ac-a60a6ad4a0b4", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c644885fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8", Pod:"calico-apiserver-c644885fc-2bv4j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali256a79c485c", MAC:"4a:8b:f0:bd:f6:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:19.809858 containerd[1464]: 2024-10-08 19:44:19.798 [INFO][5095] k8s.go 500: Wrote updated endpoint to datastore ContainerID="775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-2bv4j" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--2bv4j-eth0" Oct 8 19:44:19.839321 containerd[1464]: time="2024-10-08T19:44:19.838008330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:19.839321 containerd[1464]: time="2024-10-08T19:44:19.838796137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:19.839321 containerd[1464]: time="2024-10-08T19:44:19.838883057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:19.839321 containerd[1464]: time="2024-10-08T19:44:19.838909578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:19.862110 systemd[1]: Started cri-containerd-775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8.scope - libcontainer container 775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8. 
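Between 19:44:19.721 and 19:44:19.772 the ipam.go entries spell out the allocation path for the new apiserver pod: take the host-wide IPAM lock, look up the host's block affinities, load the affine block 192.168.45.192/26, claim the next free address from it (192.168.45.197), create a handle, write the block back, release the lock. A toy Go sketch of the "next free address in an affine block" step, assuming .192 through .196 are already taken; the types and names are illustrative, not Calico's data model:

    package main

    import (
    	"fmt"
    	"net"
    )

    // block is an illustrative stand-in for an IPAM block affine to one host.
    type block struct {
    	cidr      *net.IPNet
    	allocated map[string]string // ip -> handle
    }

    // assignFromBlock claims the first free address in the host's affine block and
    // records the handle so a later release (see the DEL entries earlier) can find it.
    func assignFromBlock(b *block, handle string) (net.IP, error) {
    	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
    		if _, used := b.allocated[ip.String()]; !used {
    			b.allocated[ip.String()] = handle
    			return ip, nil
    		}
    	}
    	return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func nextIP(ip net.IP) net.IP {
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	for i := len(out) - 1; i >= 0; i-- {
    		out[i]++
    		if out[i] != 0 {
    			break
    		}
    	}
    	return out
    }

    func main() {
    	_, cidr, _ := net.ParseCIDR("192.168.45.192/26")
    	b := &block{cidr: cidr, allocated: map[string]string{
    		// .192-.196 assumed already claimed, consistent with .197 being handed out above
    		"192.168.45.192": "x", "192.168.45.193": "x", "192.168.45.194": "x",
    		"192.168.45.195": "x", "192.168.45.196": "x",
    	}}
    	ip, _ := assignFromBlock(b, "k8s-pod-network.example")
    	fmt.Println(ip) // 192.168.45.197
    }

The per-host /26 affinity seen in these entries keeps most allocations inside a single block owned by the node, which is why the whole claim completes in about 50 ms under the host-wide lock.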
Oct 8 19:44:19.915758 containerd[1464]: time="2024-10-08T19:44:19.915607527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c644885fc-2bv4j,Uid:47b1c46e-b116-43e8-97ac-a60a6ad4a0b4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8\"" Oct 8 19:44:19.919927 containerd[1464]: time="2024-10-08T19:44:19.919788522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:44:20.152757 containerd[1464]: time="2024-10-08T19:44:20.152583820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c644885fc-hxv6x,Uid:73469108-4914-415f-8805-ecfcd8958b79,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:44:20.481544 systemd-networkd[1373]: cali038f920dba5: Link UP Oct 8 19:44:20.482796 systemd-networkd[1373]: cali038f920dba5: Gained carrier Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.213 [INFO][5185] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0 calico-apiserver-c644885fc- calico-apiserver 73469108-4914-415f-8805-ecfcd8958b79 859 0 2024-10-08 19:44:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c644885fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-5-28a2d443fc calico-apiserver-c644885fc-hxv6x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali038f920dba5 [] []}} ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.213 [INFO][5185] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.248 [INFO][5195] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" HandleID="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.264 [INFO][5195] ipam_plugin.go 270: Auto assigning IP ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" HandleID="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263e30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-5-28a2d443fc", "pod":"calico-apiserver-c644885fc-hxv6x", "timestamp":"2024-10-08 19:44:20.248921562 +0000 UTC"}, Hostname:"ci-3975-2-2-5-28a2d443fc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.264 [INFO][5195] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.264 [INFO][5195] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.264 [INFO][5195] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-5-28a2d443fc' Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.266 [INFO][5195] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.273 [INFO][5195] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.279 [INFO][5195] ipam.go 489: Trying affinity for 192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.285 [INFO][5195] ipam.go 155: Attempting to load block cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.289 [INFO][5195] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.45.192/26 host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.289 [INFO][5195] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.45.192/26 handle="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.293 [INFO][5195] ipam.go 1685: Creating new handle: k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.333 [INFO][5195] ipam.go 1203: Writing block in order to claim IPs block=192.168.45.192/26 handle="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.470 [INFO][5195] ipam.go 1216: Successfully claimed IPs: [192.168.45.198/26] block=192.168.45.192/26 handle="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.471 [INFO][5195] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.45.198/26] handle="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" host="ci-3975-2-2-5-28a2d443fc" Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.471 [INFO][5195] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:44:20.523888 containerd[1464]: 2024-10-08 19:44:20.471 [INFO][5195] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.45.198/26] IPv6=[] ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" HandleID="k8s-pod-network.29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Workload="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" Oct 8 19:44:20.526721 containerd[1464]: 2024-10-08 19:44:20.475 [INFO][5185] k8s.go 386: Populated endpoint ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0", GenerateName:"calico-apiserver-c644885fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"73469108-4914-415f-8805-ecfcd8958b79", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c644885fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"", Pod:"calico-apiserver-c644885fc-hxv6x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali038f920dba5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:20.526721 containerd[1464]: 2024-10-08 19:44:20.475 [INFO][5185] k8s.go 387: Calico CNI using IPs: [192.168.45.198/32] ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" Oct 8 19:44:20.526721 containerd[1464]: 2024-10-08 19:44:20.475 [INFO][5185] dataplane_linux.go 68: Setting the host side veth name to cali038f920dba5 ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" Oct 8 19:44:20.526721 containerd[1464]: 2024-10-08 19:44:20.483 [INFO][5185] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" Oct 8 19:44:20.526721 containerd[1464]: 2024-10-08 19:44:20.485 [INFO][5185] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0", GenerateName:"calico-apiserver-c644885fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"73469108-4914-415f-8805-ecfcd8958b79", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c644885fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-5-28a2d443fc", ContainerID:"29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc", Pod:"calico-apiserver-c644885fc-hxv6x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.45.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali038f920dba5", MAC:"be:33:ab:91:ad:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:44:20.526721 containerd[1464]: 2024-10-08 19:44:20.517 [INFO][5185] k8s.go 500: Wrote updated endpoint to datastore ContainerID="29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc" Namespace="calico-apiserver" Pod="calico-apiserver-c644885fc-hxv6x" WorkloadEndpoint="ci--3975--2--2--5--28a2d443fc-k8s-calico--apiserver--c644885fc--hxv6x-eth0" Oct 8 19:44:20.557801 containerd[1464]: time="2024-10-08T19:44:20.557559949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:44:20.558385 containerd[1464]: time="2024-10-08T19:44:20.558276715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:20.558385 containerd[1464]: time="2024-10-08T19:44:20.558341916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:44:20.558582 containerd[1464]: time="2024-10-08T19:44:20.558499197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:44:20.586574 systemd[1]: run-containerd-runc-k8s.io-29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc-runc.rRpqys.mount: Deactivated successfully. Oct 8 19:44:20.596608 systemd[1]: Started cri-containerd-29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc.scope - libcontainer container 29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc. 
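The MountVolume.SetUp failure logged at 19:44:19.407 further up is paced rather than retried in a tight loop: nestedpendingoperations records the failure and permits no retry until 19:44:19.907, a durationBeforeRetry of 500ms, and on repeated failures kubelet grows that delay before capping it. A hedged Go sketch of that pacing; pendingOp and its methods are illustrative names, not kubelet's API:

    package main

    import (
    	"fmt"
    	"time"
    )

    // pendingOp is an illustrative stand-in for one pending volume operation.
    type pendingOp struct {
    	failures      int
    	nextPermitted time.Time
    }

    // recordFailure schedules the next attempt with capped exponential backoff:
    // 500ms after the first failure, then 1s, 2s, ... up to a ceiling.
    func (op *pendingOp) recordFailure(now time.Time) {
    	op.failures++
    	delay := 500 * time.Millisecond << (op.failures - 1)
    	if ceiling := 2 * time.Minute; delay > ceiling {
    		delay = ceiling
    	}
    	op.nextPermitted = now.Add(delay)
    }

    func (op *pendingOp) mayRetry(now time.Time) bool { return !now.Before(op.nextPermitted) }

    func main() {
    	op := &pendingOp{}
    	now := time.Date(2024, 10, 8, 19, 44, 19, 407_000_000, time.UTC)
    	op.recordFailure(now) // secret "calico-apiserver-certs" does not exist yet
    	fmt.Println("no retries permitted until", op.nextPermitted.Format("15:04:05.000"))
    	fmt.Println("retry allowed 500ms later:", op.mayRetry(now.Add(500*time.Millisecond)))
    }

In this log the secret evidently turned up moments later: the next mount attempt succeeded and both apiserver pods are Running by 19:44:22.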
Oct 8 19:44:20.638866 containerd[1464]: time="2024-10-08T19:44:20.638745249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c644885fc-hxv6x,Uid:73469108-4914-415f-8805-ecfcd8958b79,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc\"" Oct 8 19:44:21.039531 systemd-networkd[1373]: cali256a79c485c: Gained IPv6LL Oct 8 19:44:22.016954 containerd[1464]: time="2024-10-08T19:44:22.016887440Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:22.019027 containerd[1464]: time="2024-10-08T19:44:22.018334372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 8 19:44:22.019027 containerd[1464]: time="2024-10-08T19:44:22.018966217Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:22.022133 containerd[1464]: time="2024-10-08T19:44:22.022069041Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:22.023236 containerd[1464]: time="2024-10-08T19:44:22.023187090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.103341928s" Oct 8 19:44:22.023236 containerd[1464]: time="2024-10-08T19:44:22.023231251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:44:22.024735 containerd[1464]: time="2024-10-08T19:44:22.024653782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:44:22.027824 containerd[1464]: time="2024-10-08T19:44:22.026875800Z" level=info msg="CreateContainer within sandbox \"775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:44:22.050665 containerd[1464]: time="2024-10-08T19:44:22.050581589Z" level=info msg="CreateContainer within sandbox \"775b1fcb2416659fa8a23b014a60536cf490884cd8cbdd13dd1bc1c0e17b74b8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a38ad21356cb24480a6c99e05eca3deeedba68e94fcc8c5df63b14b38745a812\"" Oct 8 19:44:22.051304 containerd[1464]: time="2024-10-08T19:44:22.051279914Z" level=info msg="StartContainer for \"a38ad21356cb24480a6c99e05eca3deeedba68e94fcc8c5df63b14b38745a812\"" Oct 8 19:44:22.109623 systemd[1]: Started cri-containerd-a38ad21356cb24480a6c99e05eca3deeedba68e94fcc8c5df63b14b38745a812.scope - libcontainer container a38ad21356cb24480a6c99e05eca3deeedba68e94fcc8c5df63b14b38745a812. 
Oct 8 19:44:22.152990 containerd[1464]: time="2024-10-08T19:44:22.152861563Z" level=info msg="StartContainer for \"a38ad21356cb24480a6c99e05eca3deeedba68e94fcc8c5df63b14b38745a812\" returns successfully" Oct 8 19:44:22.177568 kubelet[2728]: I1008 19:44:22.177388 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c644885fc-2bv4j" podStartSLOduration=1.0702992 podStartE2EDuration="3.177354758s" podCreationTimestamp="2024-10-08 19:44:19 +0000 UTC" firstStartedPulling="2024-10-08 19:44:19.917347422 +0000 UTC m=+67.250819966" lastFinishedPulling="2024-10-08 19:44:22.02440298 +0000 UTC m=+69.357875524" observedRunningTime="2024-10-08 19:44:22.176196549 +0000 UTC m=+69.509669093" watchObservedRunningTime="2024-10-08 19:44:22.177354758 +0000 UTC m=+69.510827302" Oct 8 19:44:22.320578 systemd-networkd[1373]: cali038f920dba5: Gained IPv6LL Oct 8 19:44:22.436066 containerd[1464]: time="2024-10-08T19:44:22.435972697Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:44:22.439872 containerd[1464]: time="2024-10-08T19:44:22.439825368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 19:44:22.442252 containerd[1464]: time="2024-10-08T19:44:22.442196747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 417.450964ms" Oct 8 19:44:22.442468 containerd[1464]: time="2024-10-08T19:44:22.442447629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:44:22.445409 containerd[1464]: time="2024-10-08T19:44:22.445331052Z" level=info msg="CreateContainer within sandbox \"29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:44:22.458589 containerd[1464]: time="2024-10-08T19:44:22.458545757Z" level=info msg="CreateContainer within sandbox \"29ae55c89880a6705b4513a7968b0e89a3ef1cdf4e352ead789714fc4cffabcc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"30f811b297c728c4ffddfcb2739008cacc08500ee3553d7e681ba9bf4dbc2960\"" Oct 8 19:44:22.459805 containerd[1464]: time="2024-10-08T19:44:22.459516405Z" level=info msg="StartContainer for \"30f811b297c728c4ffddfcb2739008cacc08500ee3553d7e681ba9bf4dbc2960\"" Oct 8 19:44:22.503692 systemd[1]: Started cri-containerd-30f811b297c728c4ffddfcb2739008cacc08500ee3553d7e681ba9bf4dbc2960.scope - libcontainer container 30f811b297c728c4ffddfcb2739008cacc08500ee3553d7e681ba9bf4dbc2960. 
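The pod_startup_latency_tracker entry above reports two durations for calico-apiserver-c644885fc-2bv4j that differ by exactly the image-pull window: podStartE2EDuration (3.177354758s) minus the span from firstStartedPulling to lastFinishedPulling (about 2.107s) is the logged podStartSLOduration of 1.0702992s. A small Go check of that arithmetic using the timestamps as logged; this is a reading of the numbers, not kubelet's implementation:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	parse := func(s string) time.Time {
    		t, _ := time.Parse(time.RFC3339Nano, s)
    		return t
    	}
    	// Values copied from the pod_startup_latency_tracker entry for 2bv4j above.
    	firstPull := parse("2024-10-08T19:44:19.917347422Z")
    	lastPull := parse("2024-10-08T19:44:22.024402980Z")
    	e2e := 3177354758 * time.Nanosecond // podStartE2EDuration as logged

    	slo := e2e - lastPull.Sub(firstPull) // end-to-end time minus the image-pull window
    	fmt.Println(slo)                     // 1.0702992s, the logged podStartSLOduration
    }

The same relation holds for the hxv6x pod a few entries below: 5.216777368s minus roughly 1.802s of pulling gives the reported 3.41443788s.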
Oct 8 19:44:22.555835 containerd[1464]: time="2024-10-08T19:44:22.555578690Z" level=info msg="StartContainer for \"30f811b297c728c4ffddfcb2739008cacc08500ee3553d7e681ba9bf4dbc2960\" returns successfully" Oct 8 19:44:24.218222 kubelet[2728]: I1008 19:44:24.216798 2728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c644885fc-hxv6x" podStartSLOduration=3.41443788 podStartE2EDuration="5.216777368s" podCreationTimestamp="2024-10-08 19:44:19 +0000 UTC" firstStartedPulling="2024-10-08 19:44:20.640804346 +0000 UTC m=+67.974276890" lastFinishedPulling="2024-10-08 19:44:22.443143834 +0000 UTC m=+69.776616378" observedRunningTime="2024-10-08 19:44:23.184756446 +0000 UTC m=+70.518228990" watchObservedRunningTime="2024-10-08 19:44:24.216777368 +0000 UTC m=+71.550249912" Oct 8 19:45:43.798213 systemd[1]: run-containerd-runc-k8s.io-ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454-runc.fTU315.mount: Deactivated successfully. Oct 8 19:47:42.660099 systemd[1]: run-containerd-runc-k8s.io-bac7f7c4ded82326647cd427bed1156e8b56957b963d1c339337f2c5c941d2e6-runc.CYsoY7.mount: Deactivated successfully. Oct 8 19:48:03.723238 systemd[1]: Started sshd@7-49.13.142.189:22-139.178.89.65:50498.service - OpenSSH per-connection server daemon (139.178.89.65:50498). Oct 8 19:48:04.691585 sshd[5887]: Accepted publickey for core from 139.178.89.65 port 50498 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:48:04.695797 sshd[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:48:04.702937 systemd-logind[1449]: New session 8 of user core. Oct 8 19:48:04.709753 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:48:05.361000 update_engine[1452]: I1008 19:48:05.360924 1452 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 8 19:48:05.361000 update_engine[1452]: I1008 19:48:05.360987 1452 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 8 19:48:05.361703 update_engine[1452]: I1008 19:48:05.361452 1452 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 8 19:48:05.362912 update_engine[1452]: I1008 19:48:05.362856 1452 omaha_request_params.cc:62] Current group set to stable Oct 8 19:48:05.363063 update_engine[1452]: I1008 19:48:05.363041 1452 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 8 19:48:05.363063 update_engine[1452]: I1008 19:48:05.363058 1452 update_attempter.cc:643] Scheduling an action processor start. 
Oct 8 19:48:05.363178 update_engine[1452]: I1008 19:48:05.363085 1452 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 19:48:05.363178 update_engine[1452]: I1008 19:48:05.363138 1452 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 8 19:48:05.363250 update_engine[1452]: I1008 19:48:05.363232 1452 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 19:48:05.363250 update_engine[1452]: I1008 19:48:05.363240 1452 omaha_request_action.cc:272] Request: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: Oct 8 19:48:05.363250 update_engine[1452]: I1008 19:48:05.363246 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 19:48:05.363978 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 8 19:48:05.369849 update_engine[1452]: I1008 19:48:05.369805 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 19:48:05.373825 update_engine[1452]: I1008 19:48:05.373759 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 19:48:05.377350 update_engine[1452]: E1008 19:48:05.377279 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 19:48:05.377515 update_engine[1452]: I1008 19:48:05.377434 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 8 19:48:05.485684 sshd[5887]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:05.489903 systemd[1]: sshd@7-49.13.142.189:22-139.178.89.65:50498.service: Deactivated successfully. Oct 8 19:48:05.496058 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:48:05.499107 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Oct 8 19:48:05.501234 systemd-logind[1449]: Removed session 8. Oct 8 19:48:10.658793 systemd[1]: Started sshd@8-49.13.142.189:22-139.178.89.65:51826.service - OpenSSH per-connection server daemon (139.178.89.65:51826). Oct 8 19:48:11.629926 sshd[5903]: Accepted publickey for core from 139.178.89.65 port 51826 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:48:11.632032 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:48:11.636386 systemd-logind[1449]: New session 9 of user core. Oct 8 19:48:11.647618 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 8 19:48:12.374063 sshd[5903]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:12.378850 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:48:12.380315 systemd[1]: sshd@8-49.13.142.189:22-139.178.89.65:51826.service: Deactivated successfully. Oct 8 19:48:12.384933 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:48:12.389351 systemd-logind[1449]: Removed session 9. Oct 8 19:48:12.652964 systemd[1]: run-containerd-runc-k8s.io-bac7f7c4ded82326647cd427bed1156e8b56957b963d1c339337f2c5c941d2e6-runc.3D6Ko7.mount: Deactivated successfully. 
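The update_engine entries above, together with their repeats at 19:48:15 and 19:48:25 further down, show the updater effectively parked: the configured Omaha endpoint is the literal word "disabled" ("Posting an Omaha request to disabled"), so every transfer fails with "Could not resolve host: disabled" and the fetcher just counts retries roughly every ten seconds. A rough Go reproduction of that observed loop, for illustration only; it is not the update_engine source:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// On this image the Omaha server appears to be the literal string "disabled",
    	// so name resolution fails on every check, as in the log entries above.
    	const omahaHost = "disabled"
    	for attempt := 1; attempt <= 3; attempt++ {
    		if _, err := net.LookupHost(omahaHost); err != nil {
    			fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
    			time.Sleep(10 * time.Second) // matches the ~10s spacing of the log entries
    			continue
    		}
    		break
    	}
    }

locksmithd meanwhile keeps reporting UPDATE_STATUS_CHECKING_FOR_UPDATE with NewVersion=0.0.0, consistent with no update server ever being reached.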
Oct 8 19:48:15.363384 update_engine[1452]: I1008 19:48:15.363289 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:48:15.363949 update_engine[1452]: I1008 19:48:15.363698 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:48:15.365184 update_engine[1452]: I1008 19:48:15.364058 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:48:15.365184 update_engine[1452]: E1008 19:48:15.364976 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:48:15.365184 update_engine[1452]: I1008 19:48:15.365031 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Oct 8 19:48:17.548810 systemd[1]: Started sshd@9-49.13.142.189:22-139.178.89.65:48274.service - OpenSSH per-connection server daemon (139.178.89.65:48274).
Oct 8 19:48:18.524252 sshd[5966]: Accepted publickey for core from 139.178.89.65 port 48274 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:18.525871 sshd[5966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:18.532537 systemd-logind[1449]: New session 10 of user core.
Oct 8 19:48:18.542625 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:48:19.279304 sshd[5966]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:19.286165 systemd[1]: sshd@9-49.13.142.189:22-139.178.89.65:48274.service: Deactivated successfully.
Oct 8 19:48:19.289690 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:48:19.292048 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:48:19.294044 systemd-logind[1449]: Removed session 10.
Oct 8 19:48:19.464942 systemd[1]: Started sshd@10-49.13.142.189:22-139.178.89.65:48280.service - OpenSSH per-connection server daemon (139.178.89.65:48280).
Oct 8 19:48:19.742488 systemd[1]: run-containerd-runc-k8s.io-ec9b20f537c74309d1e3c81b7350ca1c16f9c957f18f4c689af1858d89e66454-runc.oL98Ds.mount: Deactivated successfully.
Oct 8 19:48:20.452411 sshd[5980]: Accepted publickey for core from 139.178.89.65 port 48280 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:20.454610 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:20.461227 systemd-logind[1449]: New session 11 of user core.
Oct 8 19:48:20.465619 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:48:21.251837 sshd[5980]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:21.258326 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:48:21.261441 systemd[1]: sshd@10-49.13.142.189:22-139.178.89.65:48280.service: Deactivated successfully.
Oct 8 19:48:21.266274 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:48:21.268044 systemd-logind[1449]: Removed session 11.
Oct 8 19:48:21.421732 systemd[1]: Started sshd@11-49.13.142.189:22-139.178.89.65:48290.service - OpenSSH per-connection server daemon (139.178.89.65:48290).
Oct 8 19:48:22.397051 sshd[6011]: Accepted publickey for core from 139.178.89.65 port 48290 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:22.398754 sshd[6011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:22.406015 systemd-logind[1449]: New session 12 of user core.
Oct 8 19:48:22.411147 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:48:23.169015 sshd[6011]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:23.172504 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:48:23.173069 systemd[1]: sshd@11-49.13.142.189:22-139.178.89.65:48290.service: Deactivated successfully.
Oct 8 19:48:23.175969 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:48:23.179873 systemd-logind[1449]: Removed session 12.
Oct 8 19:48:25.361430 update_engine[1452]: I1008 19:48:25.361331 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:48:25.361941 update_engine[1452]: I1008 19:48:25.361681 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:48:25.362072 update_engine[1452]: I1008 19:48:25.362018 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:48:25.362962 update_engine[1452]: E1008 19:48:25.362917 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:48:25.363042 update_engine[1452]: I1008 19:48:25.363009 1452 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Oct 8 19:48:28.349712 systemd[1]: Started sshd@12-49.13.142.189:22-139.178.89.65:59374.service - OpenSSH per-connection server daemon (139.178.89.65:59374).
Oct 8 19:48:29.330967 sshd[6034]: Accepted publickey for core from 139.178.89.65 port 59374 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:29.333331 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:29.340185 systemd-logind[1449]: New session 13 of user core.
Oct 8 19:48:29.349571 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:48:30.088771 sshd[6034]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:30.093503 systemd[1]: sshd@12-49.13.142.189:22-139.178.89.65:59374.service: Deactivated successfully.
Oct 8 19:48:30.096267 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:48:30.099380 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:48:30.100980 systemd-logind[1449]: Removed session 13.
Oct 8 19:48:30.264500 systemd[1]: Started sshd@13-49.13.142.189:22-139.178.89.65:59378.service - OpenSSH per-connection server daemon (139.178.89.65:59378).
Oct 8 19:48:31.250576 sshd[6048]: Accepted publickey for core from 139.178.89.65 port 59378 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:31.253052 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:31.260531 systemd-logind[1449]: New session 14 of user core.
Oct 8 19:48:31.271613 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:48:31.471787 systemd[1]: Started sshd@14-49.13.142.189:22-2.228.129.230:35362.service - OpenSSH per-connection server daemon (2.228.129.230:35362).
Oct 8 19:48:31.557984 sshd[6052]: Connection closed by 2.228.129.230 port 35362 [preauth]
Oct 8 19:48:31.559355 systemd[1]: sshd@14-49.13.142.189:22-2.228.129.230:35362.service: Deactivated successfully.
Oct 8 19:48:32.242797 sshd[6048]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:32.248058 systemd[1]: sshd@13-49.13.142.189:22-139.178.89.65:59378.service: Deactivated successfully.
Oct 8 19:48:32.253002 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:48:32.255078 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:48:32.257493 systemd-logind[1449]: Removed session 14.
Oct 8 19:48:32.416054 systemd[1]: Started sshd@15-49.13.142.189:22-139.178.89.65:59390.service - OpenSSH per-connection server daemon (139.178.89.65:59390).
Oct 8 19:48:33.380782 sshd[6064]: Accepted publickey for core from 139.178.89.65 port 59390 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:33.384148 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:33.393544 systemd-logind[1449]: New session 15 of user core.
Oct 8 19:48:33.396837 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:48:35.363401 update_engine[1452]: I1008 19:48:35.361398 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:48:35.363401 update_engine[1452]: I1008 19:48:35.361609 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:48:35.363401 update_engine[1452]: I1008 19:48:35.361927 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:48:35.364020 update_engine[1452]: E1008 19:48:35.363989 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:48:35.364093 update_engine[1452]: I1008 19:48:35.364073 1452 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 8 19:48:35.364093 update_engine[1452]: I1008 19:48:35.364087 1452 omaha_request_action.cc:617] Omaha request response:
Oct 8 19:48:35.364205 update_engine[1452]: E1008 19:48:35.364187 1452 omaha_request_action.cc:636] Omaha request network transfer failed.
Oct 8 19:48:35.364238 update_engine[1452]: I1008 19:48:35.364212 1452 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Oct 8 19:48:35.364238 update_engine[1452]: I1008 19:48:35.364219 1452 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 19:48:35.364238 update_engine[1452]: I1008 19:48:35.364223 1452 update_attempter.cc:306] Processing Done.
Oct 8 19:48:35.364317 update_engine[1452]: E1008 19:48:35.364243 1452 update_attempter.cc:619] Update failed.
Oct 8 19:48:35.364317 update_engine[1452]: I1008 19:48:35.364248 1452 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Oct 8 19:48:35.364317 update_engine[1452]: I1008 19:48:35.364251 1452 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Oct 8 19:48:35.364317 update_engine[1452]: I1008 19:48:35.364256 1452 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
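Taken together with the earlier entries, the fetcher makes one attempt at 19:48:05 and retries at 19:48:15, 19:48:25 and 19:48:35, roughly ten seconds apart; after the third retry it reports "Transfer resulted in an error (0), 0 bytes downloaded" and the attempt is recorded as failed (error 2000, converted to kActionCodeOmahaErrorInHTTPResponse). A rough sketch of that fixed-interval retry pattern as observed here (the names, delay, and retry count are read off the timestamps above, not taken from update_engine's actual implementation):

```python
import time

def fetch_with_retries(fetch, max_retries=3, retry_delay_s=10):
    """Fixed-interval retry loop matching the pacing visible in the log:
    an initial attempt plus three retries about ten seconds apart, then
    give up. A sketch of the observed behaviour, not update_engine code."""
    for attempt in range(max_retries + 1):
        try:
            return fetch()
        except OSError as err:
            if attempt == max_retries:
                # Corresponds to "Transfer resulted in an error (0), 0 bytes downloaded".
                raise RuntimeError("transfer failed after all retries") from err
            print(f"No HTTP response, retry {attempt + 1}")
            time.sleep(retry_delay_s)
```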
Oct 8 19:48:35.364645 update_engine[1452]: I1008 19:48:35.364339 1452 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 19:48:35.364645 update_engine[1452]: I1008 19:48:35.364459 1452 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 19:48:35.364645 update_engine[1452]: I1008 19:48:35.364470 1452 omaha_request_action.cc:272] Request: Oct 8 19:48:35.364645 update_engine[1452]: Oct 8 19:48:35.364645 update_engine[1452]: Oct 8 19:48:35.364645 update_engine[1452]: Oct 8 19:48:35.364645 update_engine[1452]: Oct 8 19:48:35.364645 update_engine[1452]: Oct 8 19:48:35.364645 update_engine[1452]: Oct 8 19:48:35.364645 update_engine[1452]: I1008 19:48:35.364475 1452 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 19:48:35.364828 update_engine[1452]: I1008 19:48:35.364665 1452 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 19:48:35.366377 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.365394 1452 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 19:48:35.366745 update_engine[1452]: E1008 19:48:35.366060 1452 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.366118 1452 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.366124 1452 omaha_request_action.cc:617] Omaha request response: Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.366130 1452 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.366137 1452 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.366140 1452 update_attempter.cc:306] Processing Done. Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.366146 1452 update_attempter.cc:310] Error event sent. Oct 8 19:48:35.366745 update_engine[1452]: I1008 19:48:35.366154 1452 update_check_scheduler.cc:74] Next update check in 49m3s Oct 8 19:48:35.366943 locksmithd[1493]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 8 19:48:36.374570 sshd[6064]: pam_unix(sshd:session): session closed for user core Oct 8 19:48:36.385462 systemd[1]: sshd@15-49.13.142.189:22-139.178.89.65:59390.service: Deactivated successfully. Oct 8 19:48:36.389209 systemd[1]: session-15.scope: Deactivated successfully. Oct 8 19:48:36.392325 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:48:36.394856 systemd-logind[1449]: Removed session 15. Oct 8 19:48:36.550741 systemd[1]: Started sshd@16-49.13.142.189:22-139.178.89.65:51972.service - OpenSSH per-connection server daemon (139.178.89.65:51972). Oct 8 19:48:37.544300 sshd[6087]: Accepted publickey for core from 139.178.89.65 port 51972 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw Oct 8 19:48:37.546292 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:48:37.553106 systemd-logind[1449]: New session 16 of user core. Oct 8 19:48:37.557608 systemd[1]: Started session-16.scope - Session 16 of User core. 
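After the error event is sent, update_engine schedules the next attempt ("Next update check in 49m3s") and the locksmithd status it mirrors returns to UPDATE_STATUS_IDLE. The locksmithd lines are flat key=value records with one quoted field; if they need to be consumed programmatically, a small parser is enough (a hypothetical helper, shown only to document the format seen above):

```python
import shlex

def parse_locksmithd_status(line: str) -> dict:
    """Split a locksmithd status record (key=value pairs, one quoted field)
    into a dict; shlex strips the quotes around CurrentOperation."""
    return dict(token.split("=", 1) for token in shlex.split(line))

status = parse_locksmithd_status(
    'LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" '
    'NewVersion=0.0.0 NewSize=0')
print(status["CurrentOperation"])  # UPDATE_STATUS_IDLE
```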
Oct 8 19:48:38.433979 sshd[6087]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:38.438902 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:48:38.439098 systemd[1]: sshd@16-49.13.142.189:22-139.178.89.65:51972.service: Deactivated successfully.
Oct 8 19:48:38.442457 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:48:38.446966 systemd-logind[1449]: Removed session 16.
Oct 8 19:48:38.606874 systemd[1]: Started sshd@17-49.13.142.189:22-139.178.89.65:51980.service - OpenSSH per-connection server daemon (139.178.89.65:51980).
Oct 8 19:48:39.575003 sshd[6109]: Accepted publickey for core from 139.178.89.65 port 51980 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:39.577283 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:39.584480 systemd-logind[1449]: New session 17 of user core.
Oct 8 19:48:39.591624 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:48:40.316937 sshd[6109]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:40.323006 systemd[1]: sshd@17-49.13.142.189:22-139.178.89.65:51980.service: Deactivated successfully.
Oct 8 19:48:40.327129 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:48:40.328241 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:48:40.329876 systemd-logind[1449]: Removed session 17.
Oct 8 19:48:45.491500 systemd[1]: Started sshd@18-49.13.142.189:22-139.178.89.65:47654.service - OpenSSH per-connection server daemon (139.178.89.65:47654).
Oct 8 19:48:46.452669 sshd[6171]: Accepted publickey for core from 139.178.89.65 port 47654 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:46.457521 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:46.470493 systemd-logind[1449]: New session 18 of user core.
Oct 8 19:48:46.476765 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:48:47.220541 sshd[6171]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:47.226845 systemd[1]: sshd@18-49.13.142.189:22-139.178.89.65:47654.service: Deactivated successfully.
Oct 8 19:48:47.231767 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:48:47.233481 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:48:47.236000 systemd-logind[1449]: Removed session 18.
Oct 8 19:48:52.412115 systemd[1]: Started sshd@19-49.13.142.189:22-139.178.89.65:47666.service - OpenSSH per-connection server daemon (139.178.89.65:47666).
Oct 8 19:48:53.408037 sshd[6184]: Accepted publickey for core from 139.178.89.65 port 47666 ssh2: RSA SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw
Oct 8 19:48:53.410026 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:48:53.416343 systemd-logind[1449]: New session 19 of user core.
Oct 8 19:48:53.421606 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:48:54.162234 sshd[6184]: pam_unix(sshd:session): session closed for user core
Oct 8 19:48:54.167270 systemd[1]: sshd@19-49.13.142.189:22-139.178.89.65:47666.service: Deactivated successfully.
Oct 8 19:48:54.171136 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:48:54.173452 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:48:54.175691 systemd-logind[1449]: Removed session 19.
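Every accepted login above reports the same key fingerprint, SHA256:RD/Z11mwPpPLwRTLIDyFYwah0kCc69nHZ3139qs3LRw, which is OpenSSH's unpadded base64 SHA-256 digest of the raw public-key blob (the value ssh-keygen -lf prints). To check which authorized key that corresponds to, the digest can be recomputed from an authorized_keys line; the path in the usage comment is a plausible location for the core user, not something the log states:

```python
import base64
import hashlib

def ssh_sha256_fingerprint(pubkey_line: str) -> str:
    """OpenSSH-style fingerprint: base64-decode the key blob (second field
    of a public-key line), SHA-256 it, and base64-encode the digest with
    the trailing '=' padding stripped."""
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Usage (path is an assumption): compare against the fingerprint in the log.
# with open("/home/core/.ssh/authorized_keys") as f:
#     print(ssh_sha256_fingerprint(f.readline()))
```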
Oct 8 19:49:02.841842 systemd[1]: Started sshd@20-49.13.142.189:22-80.64.30.139:19530.service - OpenSSH per-connection server daemon (80.64.30.139:19530).
Oct 8 19:49:03.770448 sshd[6204]: Invalid user pi from 80.64.30.139 port 19530
Oct 8 19:49:04.117349 sshd[6204]: Connection closed by invalid user pi 80.64.30.139 port 19530 [preauth]
Oct 8 19:49:04.121134 systemd[1]: sshd@20-49.13.142.189:22-80.64.30.139:19530.service: Deactivated successfully.