Oct 8 20:12:06.928959 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 8 20:12:06.928981 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 20:12:06.928990 kernel: KASLR enabled
Oct 8 20:12:06.928996 kernel: efi: EFI v2.7 by EDK II
Oct 8 20:12:06.929002 kernel: efi: SMBIOS 3.0=0x135ed0000 MEMATTR=0x133d4f698 ACPI 2.0=0x132430018 RNG=0x13243e918 MEMRESERVE=0x13232ed18
Oct 8 20:12:06.929011 kernel: random: crng init done
Oct 8 20:12:06.929019 kernel: ACPI: Early table checksum verification disabled
Oct 8 20:12:06.929025 kernel: ACPI: RSDP 0x0000000132430018 000024 (v02 BOCHS )
Oct 8 20:12:06.929031 kernel: ACPI: XSDT 0x000000013243FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Oct 8 20:12:06.929037 kernel: ACPI: FACP 0x000000013243FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929045 kernel: ACPI: DSDT 0x0000000132437518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929050 kernel: ACPI: APIC 0x000000013243FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929056 kernel: ACPI: PPTT 0x000000013243FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929062 kernel: ACPI: GTDT 0x000000013243D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929070 kernel: ACPI: MCFG 0x000000013243FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929078 kernel: ACPI: SPCR 0x000000013243E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929085 kernel: ACPI: DBG2 0x000000013243E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929091 kernel: ACPI: IORT 0x000000013243E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 8 20:12:06.929097 kernel: ACPI: BGRT 0x000000013243E798 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 8 20:12:06.929104 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Oct 8 20:12:06.929110 kernel: NUMA: Failed to initialise from firmware
Oct 8 20:12:06.929116 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 20:12:06.929123 kernel: NUMA: NODE_DATA [mem 0x13981f800-0x139824fff]
Oct 8 20:12:06.929129 kernel: Zone ranges:
Oct 8 20:12:06.929135 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 20:12:06.929142 kernel: DMA32 empty
Oct 8 20:12:06.929149 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Oct 8 20:12:06.929155 kernel: Movable zone start for each node
Oct 8 20:12:06.929162 kernel: Early memory node ranges
Oct 8 20:12:06.929168 kernel: node 0: [mem 0x0000000040000000-0x000000013243ffff]
Oct 8 20:12:06.929174 kernel: node 0: [mem 0x0000000132440000-0x000000013272ffff]
Oct 8 20:12:06.929180 kernel: node 0: [mem 0x0000000132730000-0x0000000135bfffff]
Oct 8 20:12:06.929186 kernel: node 0: [mem 0x0000000135c00000-0x0000000135fdffff]
Oct 8 20:12:06.929193 kernel: node 0: [mem 0x0000000135fe0000-0x0000000139ffffff]
Oct 8 20:12:06.929199 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Oct 8 20:12:06.929205 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Oct 8 20:12:06.929211 kernel: psci: probing for conduit method from ACPI.
Oct 8 20:12:06.929219 kernel: psci: PSCIv1.1 detected in firmware.
Oct 8 20:12:06.929225 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 20:12:06.929232 kernel: psci: Trusted OS migration not required
Oct 8 20:12:06.929241 kernel: psci: SMC Calling Convention v1.1
Oct 8 20:12:06.929247 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 8 20:12:06.929254 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 20:12:06.929262 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 20:12:06.929269 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 20:12:06.929276 kernel: Detected PIPT I-cache on CPU0
Oct 8 20:12:06.929282 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 20:12:06.929289 kernel: CPU features: detected: Hardware dirty bit management
Oct 8 20:12:06.929296 kernel: CPU features: detected: Spectre-v4
Oct 8 20:12:06.929303 kernel: CPU features: detected: Spectre-BHB
Oct 8 20:12:06.929416 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 8 20:12:06.929425 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 8 20:12:06.929432 kernel: CPU features: detected: ARM erratum 1418040
Oct 8 20:12:06.929438 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 8 20:12:06.929448 kernel: alternatives: applying boot alternatives
Oct 8 20:12:06.929456 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 20:12:06.929463 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 20:12:06.929470 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 20:12:06.929477 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 20:12:06.929483 kernel: Fallback order for Node 0: 0
Oct 8 20:12:06.929490 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Oct 8 20:12:06.929497 kernel: Policy zone: Normal
Oct 8 20:12:06.929503 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 20:12:06.929510 kernel: software IO TLB: area num 2.
Oct 8 20:12:06.929517 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Oct 8 20:12:06.929525 kernel: Memory: 3881848K/4096000K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 214152K reserved, 0K cma-reserved)
Oct 8 20:12:06.929532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 20:12:06.929539 kernel: trace event string verifier disabled
Oct 8 20:12:06.929546 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 20:12:06.929553 kernel: rcu: RCU event tracing is enabled.
Oct 8 20:12:06.929560 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 20:12:06.929567 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 20:12:06.929574 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 20:12:06.929580 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 20:12:06.929587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 20:12:06.929594 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 20:12:06.929602 kernel: GICv3: 256 SPIs implemented
Oct 8 20:12:06.929609 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 20:12:06.929615 kernel: Root IRQ handler: gic_handle_irq
Oct 8 20:12:06.929622 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Oct 8 20:12:06.929629 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 8 20:12:06.929635 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 8 20:12:06.929642 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 20:12:06.929649 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 20:12:06.929656 kernel: GICv3: using LPI property table @0x00000001000e0000
Oct 8 20:12:06.929663 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Oct 8 20:12:06.929670 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 20:12:06.929678 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:12:06.929685 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 8 20:12:06.929692 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 8 20:12:06.929699 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 8 20:12:06.929705 kernel: Console: colour dummy device 80x25
Oct 8 20:12:06.929712 kernel: ACPI: Core revision 20230628
Oct 8 20:12:06.929719 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 8 20:12:06.929726 kernel: pid_max: default: 32768 minimum: 301
Oct 8 20:12:06.929733 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 20:12:06.929740 kernel: SELinux: Initializing.
Oct 8 20:12:06.929748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 20:12:06.929755 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 20:12:06.929762 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:12:06.929770 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 20:12:06.929776 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 20:12:06.929783 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 20:12:06.929790 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 8 20:12:06.929797 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 8 20:12:06.929804 kernel: Remapping and enabling EFI services.
Oct 8 20:12:06.929812 kernel: smp: Bringing up secondary CPUs ...
Oct 8 20:12:06.929818 kernel: Detected PIPT I-cache on CPU1
Oct 8 20:12:06.929825 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 8 20:12:06.929832 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Oct 8 20:12:06.929839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 8 20:12:06.929846 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 8 20:12:06.929853 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 20:12:06.929860 kernel: SMP: Total of 2 processors activated.
Oct 8 20:12:06.929867 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 20:12:06.929874 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 8 20:12:06.929882 kernel: CPU features: detected: Common not Private translations
Oct 8 20:12:06.929889 kernel: CPU features: detected: CRC32 instructions
Oct 8 20:12:06.929900 kernel: CPU features: detected: Enhanced Virtualization Traps
Oct 8 20:12:06.929914 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 8 20:12:06.929921 kernel: CPU features: detected: LSE atomic instructions
Oct 8 20:12:06.929928 kernel: CPU features: detected: Privileged Access Never
Oct 8 20:12:06.929935 kernel: CPU features: detected: RAS Extension Support
Oct 8 20:12:06.929943 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 8 20:12:06.929950 kernel: CPU: All CPU(s) started at EL1
Oct 8 20:12:06.929959 kernel: alternatives: applying system-wide alternatives
Oct 8 20:12:06.929966 kernel: devtmpfs: initialized
Oct 8 20:12:06.929974 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 20:12:06.929981 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 20:12:06.929988 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 20:12:06.929995 kernel: SMBIOS 3.0.0 present.
Oct 8 20:12:06.930002 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Oct 8 20:12:06.930011 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 20:12:06.930018 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 20:12:06.930026 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 20:12:06.930033 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 20:12:06.930041 kernel: audit: initializing netlink subsys (disabled)
Oct 8 20:12:06.930048 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Oct 8 20:12:06.930055 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 20:12:06.930062 kernel: cpuidle: using governor menu
Oct 8 20:12:06.930070 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 20:12:06.930079 kernel: ASID allocator initialised with 32768 entries
Oct 8 20:12:06.930086 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 20:12:06.930093 kernel: Serial: AMBA PL011 UART driver
Oct 8 20:12:06.930101 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Oct 8 20:12:06.930108 kernel: Modules: 0 pages in range for non-PLT usage
Oct 8 20:12:06.930115 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 20:12:06.930123 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 20:12:06.930130 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 20:12:06.930137 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 20:12:06.930146 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 20:12:06.930153 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 20:12:06.930160 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 20:12:06.930168 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 20:12:06.930175 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 20:12:06.930182 kernel: ACPI: Added _OSI(Module Device)
Oct 8 20:12:06.930189 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 20:12:06.930196 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 20:12:06.930204 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 20:12:06.930212 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 20:12:06.930219 kernel: ACPI: Interpreter enabled
Oct 8 20:12:06.930226 kernel: ACPI: Using GIC for interrupt routing
Oct 8 20:12:06.930234 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 20:12:06.930241 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 8 20:12:06.930248 kernel: printk: console [ttyAMA0] enabled
Oct 8 20:12:06.930255 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 8 20:12:06.930460 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 20:12:06.930544 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 20:12:06.930608 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 20:12:06.930670 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 8 20:12:06.930732 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 8 20:12:06.930742 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 8 20:12:06.930749 kernel: PCI host bridge to bus 0000:00
Oct 8 20:12:06.930819 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 8 20:12:06.930878 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 20:12:06.930946 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 8 20:12:06.931009 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 8 20:12:06.931093 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 8 20:12:06.931187 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Oct 8 20:12:06.931255 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Oct 8 20:12:06.931336 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 20:12:06.931426 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.931495 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Oct 8 20:12:06.931568 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.931633 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Oct 8 20:12:06.931704 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.931768 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Oct 8 20:12:06.931848 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.931914 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Oct 8 20:12:06.931994 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.932061 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Oct 8 20:12:06.932132 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.932207 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Oct 8 20:12:06.932291 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.932373 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Oct 8 20:12:06.932447 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.932512 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Oct 8 20:12:06.932587 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Oct 8 20:12:06.932658 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Oct 8 20:12:06.932732 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Oct 8 20:12:06.932797 kernel: pci 0000:00:04.0: reg 0x10: [io 0x8200-0x8207]
Oct 8 20:12:06.932880 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 20:12:06.932948 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Oct 8 20:12:06.933020 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 20:12:06.933099 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 20:12:06.933192 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 8 20:12:06.933273 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Oct 8 20:12:06.933363 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Oct 8 20:12:06.933438 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Oct 8 20:12:06.933506 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Oct 8 20:12:06.933579 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Oct 8 20:12:06.933647 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Oct 8 20:12:06.933724 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 8 20:12:06.933796 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Oct 8 20:12:06.933876 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Oct 8 20:12:06.933956 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Oct 8 20:12:06.934024 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 20:12:06.934099 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Oct 8 20:12:06.934170 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Oct 8 20:12:06.934244 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Oct 8 20:12:06.934318 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Oct 8 20:12:06.934404 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Oct 8 20:12:06.934472 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Oct 8 20:12:06.934548 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Oct 8 20:12:06.934628 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Oct 8 20:12:06.934704 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Oct 8 20:12:06.934793 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Oct 8 20:12:06.934885 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 8 20:12:06.934959 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Oct 8 20:12:06.935030 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Oct 8 20:12:06.935097 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 8 20:12:06.935161 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Oct 8 20:12:06.935226 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Oct 8 20:12:06.935296 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 8 20:12:06.935397 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Oct 8 20:12:06.935462 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 05] add_size 200000 add_align 100000
Oct 8 20:12:06.935526 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 8 20:12:06.935595 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Oct 8 20:12:06.935665 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Oct 8 20:12:06.935729 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 8 20:12:06.935796 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Oct 8 20:12:06.935859 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Oct 8 20:12:06.935931 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 8 20:12:06.936014 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Oct 8 20:12:06.936122 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Oct 8 20:12:06.936233 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 8 20:12:06.936320 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Oct 8 20:12:06.936401 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Oct 8 20:12:06.936481 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Oct 8 20:12:06.936552 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 20:12:06.936619 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Oct 8 20:12:06.936686 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 20:12:06.936752 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Oct 8 20:12:06.936831 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 20:12:06.936903 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Oct 8 20:12:06.936973 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 20:12:06.937044 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Oct 8 20:12:06.937110 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 20:12:06.937174 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Oct 8 20:12:06.937244 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 20:12:06.937380 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Oct 8 20:12:06.937461 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 20:12:06.937524 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Oct 8 20:12:06.937594 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 20:12:06.937658 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Oct 8 20:12:06.937721 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 20:12:06.937789 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Oct 8 20:12:06.937851 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Oct 8 20:12:06.937918 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Oct 8 20:12:06.937981 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Oct 8 20:12:06.938045 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Oct 8 20:12:06.938108 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Oct 8 20:12:06.938171 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Oct 8 20:12:06.938241 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Oct 8 20:12:06.938306 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Oct 8 20:12:06.938399 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Oct 8 20:12:06.938476 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Oct 8 20:12:06.938539 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Oct 8 20:12:06.938608 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Oct 8 20:12:06.938673 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Oct 8 20:12:06.938736 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Oct 8 20:12:06.938805 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Oct 8 20:12:06.938881 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Oct 8 20:12:06.938944 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Oct 8 20:12:06.939011 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Oct 8 20:12:06.939074 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Oct 8 20:12:06.939140 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Oct 8 20:12:06.939209 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Oct 8 20:12:06.939274 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 8 20:12:06.940092 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Oct 8 20:12:06.940179 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Oct 8 20:12:06.940264 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 8 20:12:06.940383 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Oct 8 20:12:06.940453 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 20:12:06.940524 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Oct 8 20:12:06.941451 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Oct 8 20:12:06.941533 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 8 20:12:06.941597 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Oct 8 20:12:06.941661 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 20:12:06.941732 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Oct 8 20:12:06.941814 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Oct 8 20:12:06.941880 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Oct 8 20:12:06.941950 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 8 20:12:06.942014 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Oct 8 20:12:06.942082 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 20:12:06.942161 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Oct 8 20:12:06.942228 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Oct 8 20:12:06.942306 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 8 20:12:06.943557 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Oct 8 20:12:06.943626 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 20:12:06.943700 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Oct 8 20:12:06.943778 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Oct 8 20:12:06.943852 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 8 20:12:06.943916 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Oct 8 20:12:06.943980 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 20:12:06.944052 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Oct 8 20:12:06.944125 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Oct 8 20:12:06.944232 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Oct 8 20:12:06.947547 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 8 20:12:06.947663 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Oct 8 20:12:06.947739 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 20:12:06.947814 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Oct 8 20:12:06.947883 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Oct 8 20:12:06.947951 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Oct 8 20:12:06.948017 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Oct 8 20:12:06.948089 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 8 20:12:06.948161 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Oct 8 20:12:06.948239 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 20:12:06.948324 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Oct 8 20:12:06.948395 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 8 20:12:06.948466 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Oct 8 20:12:06.948533 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 20:12:06.948617 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Oct 8 20:12:06.948683 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Oct 8 20:12:06.948747 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Oct 8 20:12:06.948821 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 20:12:06.948891 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 8 20:12:06.948957 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 20:12:06.949017 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 8 20:12:06.949087 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Oct 8 20:12:06.949154 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Oct 8 20:12:06.949223 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Oct 8 20:12:06.949292 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Oct 8 20:12:06.949378 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Oct 8 20:12:06.949446 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Oct 8 20:12:06.949529 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Oct 8 20:12:06.949595 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Oct 8 20:12:06.949658 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Oct 8 20:12:06.949728 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Oct 8 20:12:06.949792 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Oct 8 20:12:06.949852 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Oct 8 20:12:06.949919 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Oct 8 20:12:06.949991 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Oct 8 20:12:06.950054 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Oct 8 20:12:06.950127 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Oct 8 20:12:06.950188 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Oct 8 20:12:06.950250 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Oct 8 20:12:06.950992 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Oct 8 20:12:06.951085 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Oct 8 20:12:06.951163 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Oct 8 20:12:06.951245 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Oct 8 20:12:06.951574 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Oct 8 20:12:06.951650 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Oct 8 20:12:06.951724 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Oct 8 20:12:06.951790 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Oct 8 20:12:06.951861 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Oct 8 20:12:06.951872 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 20:12:06.951884 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 20:12:06.951892 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 20:12:06.951900 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 20:12:06.951908 kernel: iommu: Default domain type: Translated
Oct 8 20:12:06.951921 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 20:12:06.951929 kernel: efivars: Registered efivars operations
Oct 8 20:12:06.951937 kernel: vgaarb: loaded
Oct 8 20:12:06.951945 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 20:12:06.951953 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 20:12:06.951963 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 20:12:06.951971 kernel: pnp: PnP ACPI init
Oct 8 20:12:06.952067 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 8 20:12:06.952080 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 20:12:06.952088 kernel: NET: Registered PF_INET protocol family
Oct 8 20:12:06.952096 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 20:12:06.952104 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 20:12:06.952112 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 20:12:06.952122 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 20:12:06.952130 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 20:12:06.952138 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 20:12:06.952146 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 20:12:06.952154 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 20:12:06.952162 kernel:
NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 8 20:12:06.952256 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Oct 8 20:12:06.952269 kernel: PCI: CLS 0 bytes, default 64 Oct 8 20:12:06.952281 kernel: kvm [1]: HYP mode not available Oct 8 20:12:06.952292 kernel: Initialise system trusted keyrings Oct 8 20:12:06.952300 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 8 20:12:06.952307 kernel: Key type asymmetric registered Oct 8 20:12:06.952386 kernel: Asymmetric key parser 'x509' registered Oct 8 20:12:06.952394 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 8 20:12:06.952401 kernel: io scheduler mq-deadline registered Oct 8 20:12:06.952409 kernel: io scheduler kyber registered Oct 8 20:12:06.952417 kernel: io scheduler bfq registered Oct 8 20:12:06.952426 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Oct 8 20:12:06.952518 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Oct 8 20:12:06.952590 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Oct 8 20:12:06.952657 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.952727 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Oct 8 20:12:06.952807 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Oct 8 20:12:06.952875 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.952947 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Oct 8 20:12:06.953019 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Oct 8 20:12:06.953094 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.953179 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Oct 8 20:12:06.953254 kernel: pcieport 
0000:00:02.3: AER: enabled with IRQ 53 Oct 8 20:12:06.953411 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.953493 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Oct 8 20:12:06.953560 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Oct 8 20:12:06.953625 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.953692 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Oct 8 20:12:06.953759 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Oct 8 20:12:06.953824 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.954414 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Oct 8 20:12:06.954491 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Oct 8 20:12:06.954573 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.954643 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Oct 8 20:12:06.954708 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Oct 8 20:12:06.954772 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.954787 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Oct 8 20:12:06.954857 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Oct 8 20:12:06.954930 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Oct 8 20:12:06.954995 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 8 20:12:06.955005 kernel: input: Power Button as 
/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 8 20:12:06.955013 kernel: ACPI: button: Power Button [PWRB] Oct 8 20:12:06.955021 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 20:12:06.955089 kernel: virtio-pci 0000:03:00.0: enabling device (0000 -> 0002) Oct 8 20:12:06.955173 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Oct 8 20:12:06.955252 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Oct 8 20:12:06.955263 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 8 20:12:06.955271 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 8 20:12:06.956414 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Oct 8 20:12:06.956435 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Oct 8 20:12:06.956443 kernel: thunder_xcv, ver 1.0 Oct 8 20:12:06.956451 kernel: thunder_bgx, ver 1.0 Oct 8 20:12:06.956463 kernel: nicpf, ver 1.0 Oct 8 20:12:06.956471 kernel: nicvf, ver 1.0 Oct 8 20:12:06.956554 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 8 20:12:06.956629 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T20:12:06 UTC (1728418326) Oct 8 20:12:06.956641 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 8 20:12:06.956649 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 8 20:12:06.956657 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 8 20:12:06.956665 kernel: watchdog: Hard watchdog permanently disabled Oct 8 20:12:06.956675 kernel: NET: Registered PF_INET6 protocol family Oct 8 20:12:06.956683 kernel: Segment Routing with IPv6 Oct 8 20:12:06.956691 kernel: In-situ OAM (IOAM) with IPv6 Oct 8 20:12:06.956698 kernel: NET: Registered PF_PACKET protocol family Oct 8 20:12:06.956706 kernel: Key type dns_resolver registered Oct 8 20:12:06.956713 kernel: registered taskstats version 1 Oct 8 20:12:06.956721 kernel: Loading compiled-in X.509 certificates Oct 8 20:12:06.956729 kernel: Loaded 
X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e' Oct 8 20:12:06.956736 kernel: Key type .fscrypt registered Oct 8 20:12:06.956745 kernel: Key type fscrypt-provisioning registered Oct 8 20:12:06.956753 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 8 20:12:06.956761 kernel: ima: Allocated hash algorithm: sha1 Oct 8 20:12:06.956774 kernel: ima: No architecture policies found Oct 8 20:12:06.956782 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 8 20:12:06.956790 kernel: clk: Disabling unused clocks Oct 8 20:12:06.956801 kernel: Freeing unused kernel memory: 39104K Oct 8 20:12:06.956809 kernel: Run /init as init process Oct 8 20:12:06.956816 kernel: with arguments: Oct 8 20:12:06.956825 kernel: /init Oct 8 20:12:06.956833 kernel: with environment: Oct 8 20:12:06.956843 kernel: HOME=/ Oct 8 20:12:06.956851 kernel: TERM=linux Oct 8 20:12:06.956858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 8 20:12:06.956868 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 8 20:12:06.956878 systemd[1]: Detected virtualization kvm. Oct 8 20:12:06.956888 systemd[1]: Detected architecture arm64. Oct 8 20:12:06.956896 systemd[1]: Running in initrd. Oct 8 20:12:06.956904 systemd[1]: No hostname configured, using default hostname. Oct 8 20:12:06.956912 systemd[1]: Hostname set to . Oct 8 20:12:06.956923 systemd[1]: Initializing machine ID from VM UUID. Oct 8 20:12:06.956934 systemd[1]: Queued start job for default target initrd.target. Oct 8 20:12:06.956943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Oct 8 20:12:06.956951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 20:12:06.956961 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 8 20:12:06.956970 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 8 20:12:06.956978 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 8 20:12:06.956989 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 8 20:12:06.956998 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 8 20:12:06.957007 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 8 20:12:06.957017 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 20:12:06.957027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 8 20:12:06.957039 systemd[1]: Reached target paths.target - Path Units. Oct 8 20:12:06.957047 systemd[1]: Reached target slices.target - Slice Units. Oct 8 20:12:06.957055 systemd[1]: Reached target swap.target - Swaps. Oct 8 20:12:06.957063 systemd[1]: Reached target timers.target - Timer Units. Oct 8 20:12:06.957071 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 20:12:06.957079 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 20:12:06.957087 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 8 20:12:06.957096 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 8 20:12:06.957105 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 8 20:12:06.957113 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Oct 8 20:12:06.957121 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 8 20:12:06.957130 systemd[1]: Reached target sockets.target - Socket Units. Oct 8 20:12:06.957138 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 8 20:12:06.957146 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 8 20:12:06.957155 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 8 20:12:06.957164 systemd[1]: Starting systemd-fsck-usr.service... Oct 8 20:12:06.957173 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 8 20:12:06.957181 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 8 20:12:06.957189 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:06.957198 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 8 20:12:06.957231 systemd-journald[236]: Collecting audit messages is disabled. Oct 8 20:12:06.957254 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 20:12:06.957262 systemd[1]: Finished systemd-fsck-usr.service. Oct 8 20:12:06.957275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 8 20:12:06.957284 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 8 20:12:06.957294 systemd-journald[236]: Journal started Oct 8 20:12:06.958375 systemd-journald[236]: Runtime Journal (/run/log/journal/29ad8f7067db4a72bfe72a98776a8a4b) is 8.0M, max 76.5M, 68.5M free. Oct 8 20:12:06.958446 kernel: Bridge firewalling registered Oct 8 20:12:06.934919 systemd-modules-load[237]: Inserted module 'overlay' Oct 8 20:12:06.961411 systemd[1]: Started systemd-journald.service - Journal Service. 
Oct 8 20:12:06.958053 systemd-modules-load[237]: Inserted module 'br_netfilter' Oct 8 20:12:06.962154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 8 20:12:06.964791 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:06.965551 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 8 20:12:06.975474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:12:06.978071 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 8 20:12:06.981660 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 8 20:12:06.992063 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Oct 8 20:12:07.003540 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:12:07.007724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 8 20:12:07.010425 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 8 20:12:07.012002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Oct 8 20:12:07.017488 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 8 20:12:07.019841 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 8 20:12:07.036080 dracut-cmdline[273]: dracut-dracut-053 Oct 8 20:12:07.039323 dracut-cmdline[273]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2 Oct 8 20:12:07.064798 systemd-resolved[275]: Positive Trust Anchors: Oct 8 20:12:07.065468 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 8 20:12:07.066245 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Oct 8 20:12:07.074071 systemd-resolved[275]: Defaulting to hostname 'linux'. Oct 8 20:12:07.075118 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 8 20:12:07.075863 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 8 20:12:07.127358 kernel: SCSI subsystem initialized Oct 8 20:12:07.132344 kernel: Loading iSCSI transport class v2.0-870. Oct 8 20:12:07.139344 kernel: iscsi: registered transport (tcp) Oct 8 20:12:07.156360 kernel: iscsi: registered transport (qla4xxx) Oct 8 20:12:07.156455 kernel: QLogic iSCSI HBA Driver Oct 8 20:12:07.205486 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Oct 8 20:12:07.219551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 8 20:12:07.241211 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 8 20:12:07.241272 kernel: device-mapper: uevent: version 1.0.3 Oct 8 20:12:07.241283 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 20:12:07.292373 kernel: raid6: neonx8 gen() 15467 MB/s Oct 8 20:12:07.309363 kernel: raid6: neonx4 gen() 10541 MB/s Oct 8 20:12:07.326359 kernel: raid6: neonx2 gen() 13101 MB/s Oct 8 20:12:07.343379 kernel: raid6: neonx1 gen() 10412 MB/s Oct 8 20:12:07.360350 kernel: raid6: int64x8 gen() 6937 MB/s Oct 8 20:12:07.377395 kernel: raid6: int64x4 gen() 7294 MB/s Oct 8 20:12:07.394372 kernel: raid6: int64x2 gen() 6105 MB/s Oct 8 20:12:07.411382 kernel: raid6: int64x1 gen() 5053 MB/s Oct 8 20:12:07.411465 kernel: raid6: using algorithm neonx8 gen() 15467 MB/s Oct 8 20:12:07.428631 kernel: raid6: .... xor() 11874 MB/s, rmw enabled Oct 8 20:12:07.428712 kernel: raid6: using neon recovery algorithm Oct 8 20:12:07.433377 kernel: xor: measuring software checksum speed Oct 8 20:12:07.433459 kernel: 8regs : 19821 MB/sec Oct 8 20:12:07.433495 kernel: 32regs : 14394 MB/sec Oct 8 20:12:07.434466 kernel: arm64_neon : 26857 MB/sec Oct 8 20:12:07.434509 kernel: xor: using function: arm64_neon (26857 MB/sec) Oct 8 20:12:07.491366 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 20:12:07.507396 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 20:12:07.514537 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 20:12:07.539094 systemd-udevd[457]: Using default interface naming scheme 'v255'. Oct 8 20:12:07.542856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Oct 8 20:12:07.553510 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 20:12:07.570405 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Oct 8 20:12:07.606213 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 20:12:07.611493 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 20:12:07.669657 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 20:12:07.680751 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 20:12:07.702175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 20:12:07.704033 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 20:12:07.706009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 20:12:07.707326 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 20:12:07.715482 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 20:12:07.732099 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 20:12:07.765997 kernel: scsi host0: Virtio SCSI HBA Oct 8 20:12:07.813964 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 8 20:12:07.818355 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Oct 8 20:12:07.845362 kernel: ACPI: bus type USB registered Oct 8 20:12:07.847352 kernel: usbcore: registered new interface driver usbfs Oct 8 20:12:07.847401 kernel: usbcore: registered new interface driver hub Oct 8 20:12:07.847413 kernel: usbcore: registered new device driver usb Oct 8 20:12:07.854080 kernel: sr 0:0:0:0: Power-on or device reset occurred Oct 8 20:12:07.854809 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Oct 8 20:12:07.856717 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Oct 8 20:12:07.856894 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 8 20:12:07.855955 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:12:07.859037 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Oct 8 20:12:07.857782 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:12:07.860612 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 20:12:07.860778 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:07.862083 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:07.869793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 20:12:07.889193 kernel: sd 0:0:0:1: Power-on or device reset occurred Oct 8 20:12:07.891444 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Oct 8 20:12:07.891738 kernel: sd 0:0:0:1: [sda] Write Protect is off Oct 8 20:12:07.891942 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Oct 8 20:12:07.893666 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 8 20:12:07.895040 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 20:12:07.899492 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 20:12:07.899528 kernel: GPT:17805311 != 80003071 Oct 8 20:12:07.902956 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 20:12:07.902997 kernel: GPT:17805311 != 80003071 Oct 8 20:12:07.903010 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 8 20:12:07.903021 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:12:07.903586 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Oct 8 20:12:07.913810 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 20:12:07.914028 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Oct 8 20:12:07.914117 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 8 20:12:07.912649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 20:12:07.916958 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Oct 8 20:12:07.917109 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Oct 8 20:12:07.917193 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Oct 8 20:12:07.919945 kernel: hub 1-0:1.0: USB hub found Oct 8 20:12:07.920134 kernel: hub 1-0:1.0: 4 ports detected Oct 8 20:12:07.920270 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 8 20:12:07.922389 kernel: hub 2-0:1.0: USB hub found Oct 8 20:12:07.923446 kernel: hub 2-0:1.0: 4 ports detected Oct 8 20:12:07.932270 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 20:12:07.955348 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (512) Oct 8 20:12:07.955397 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (504) Oct 8 20:12:07.958443 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Oct 8 20:12:07.973109 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Oct 8 20:12:07.978838 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Oct 8 20:12:07.984003 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. 
Oct 8 20:12:07.984800 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Oct 8 20:12:07.994553 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 20:12:08.002119 disk-uuid[576]: Primary Header is updated. Oct 8 20:12:08.002119 disk-uuid[576]: Secondary Entries is updated. Oct 8 20:12:08.002119 disk-uuid[576]: Secondary Header is updated. Oct 8 20:12:08.007339 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:12:08.157342 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 8 20:12:08.300855 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Oct 8 20:12:08.300915 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Oct 8 20:12:08.301076 kernel: usbcore: registered new interface driver usbhid Oct 8 20:12:08.301097 kernel: usbhid: USB HID core driver Oct 8 20:12:08.400358 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Oct 8 20:12:08.531359 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Oct 8 20:12:08.585365 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Oct 8 20:12:09.022350 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 8 20:12:09.024832 disk-uuid[577]: The operation has completed successfully. Oct 8 20:12:09.077918 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 20:12:09.078015 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 20:12:09.098532 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Oct 8 20:12:09.102558 sh[594]: Success Oct 8 20:12:09.118340 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 20:12:09.172291 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 20:12:09.182470 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 20:12:09.184275 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 8 20:12:09.212843 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575 Oct 8 20:12:09.212899 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:12:09.214183 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 20:12:09.214212 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 20:12:09.215904 kernel: BTRFS info (device dm-0): using free space tree Oct 8 20:12:09.222360 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 20:12:09.224577 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 20:12:09.225236 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 20:12:09.234518 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 20:12:09.237381 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Oct 8 20:12:09.255430 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 20:12:09.255497 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Oct 8 20:12:09.255517 kernel: BTRFS info (device sda6): using free space tree Oct 8 20:12:09.260374 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 8 20:12:09.260448 kernel: BTRFS info (device sda6): auto enabling async discard Oct 8 20:12:09.271393 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f Oct 8 20:12:09.271187 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 20:12:09.279417 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 20:12:09.285489 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 20:12:09.380712 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 8 20:12:09.386509 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 20:12:09.414234 systemd-networkd[782]: lo: Link UP Oct 8 20:12:09.414246 systemd-networkd[782]: lo: Gained carrier Oct 8 20:12:09.415863 systemd-networkd[782]: Enumeration completed Oct 8 20:12:09.415964 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 20:12:09.417258 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:09.417261 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 20:12:09.418026 systemd[1]: Reached target network.target - Network. Oct 8 20:12:09.419150 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:09.419153 systemd-networkd[782]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 8 20:12:09.420106 systemd-networkd[782]: eth0: Link UP Oct 8 20:12:09.420109 systemd-networkd[782]: eth0: Gained carrier Oct 8 20:12:09.420116 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:09.430758 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 20:12:09.429078 ignition[682]: Ignition 2.18.0 Oct 8 20:12:09.430860 systemd-networkd[782]: eth1: Link UP Oct 8 20:12:09.429086 ignition[682]: Stage: fetch-offline Oct 8 20:12:09.430863 systemd-networkd[782]: eth1: Gained carrier Oct 8 20:12:09.429124 ignition[682]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:09.430872 systemd-networkd[782]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 20:12:09.429132 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:09.429221 ignition[682]: parsed url from cmdline: "" Oct 8 20:12:09.429223 ignition[682]: no config URL provided Oct 8 20:12:09.429228 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:12:09.429234 ignition[682]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:12:09.429238 ignition[682]: failed to fetch config: resource requires networking Oct 8 20:12:09.429417 ignition[682]: Ignition finished successfully Oct 8 20:12:09.441505 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 8 20:12:09.452898 ignition[787]: Ignition 2.18.0 Oct 8 20:12:09.452909 ignition[787]: Stage: fetch Oct 8 20:12:09.453101 ignition[787]: no configs at "/usr/lib/ignition/base.d" Oct 8 20:12:09.453113 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Oct 8 20:12:09.453205 ignition[787]: parsed url from cmdline: "" Oct 8 20:12:09.453208 ignition[787]: no config URL provided Oct 8 20:12:09.453213 ignition[787]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 20:12:09.453221 ignition[787]: no config at "/usr/lib/ignition/user.ign" Oct 8 20:12:09.453239 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Oct 8 20:12:09.454029 ignition[787]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 8 20:12:09.459366 systemd-networkd[782]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 8 20:12:09.546420 systemd-networkd[782]: eth0: DHCPv4 address 49.13.72.235/32, gateway 172.31.1.1 acquired from 172.31.1.1 Oct 8 20:12:09.654235 ignition[787]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Oct 8 20:12:09.660765 ignition[787]: GET result: OK Oct 8 20:12:09.660963 ignition[787]: parsing config with SHA512: 657ac0396d3588e2257a2599baec977a385fbd0c13001dc3d46bcfd1f1f89633243cbffcb799d3b82f23add864058548be5d764be6a84e441acf381f563c13bd Oct 8 20:12:09.667501 unknown[787]: fetched base config from "system" Oct 8 20:12:09.667515 unknown[787]: fetched base config from "system" Oct 8 20:12:09.668002 ignition[787]: fetch: fetch complete Oct 8 20:12:09.667522 unknown[787]: fetched user config from "hetzner" Oct 8 20:12:09.668007 ignition[787]: fetch: fetch passed Oct 8 20:12:09.670624 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 20:12:09.668056 ignition[787]: Ignition finished successfully Oct 8 20:12:09.677452 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 8 20:12:09.692635 ignition[795]: Ignition 2.18.0
Oct 8 20:12:09.692654 ignition[795]: Stage: kargs
Oct 8 20:12:09.692979 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:12:09.693000 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:12:09.697897 ignition[795]: kargs: kargs passed
Oct 8 20:12:09.698384 ignition[795]: Ignition finished successfully
Oct 8 20:12:09.700670 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 20:12:09.711544 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 20:12:09.724248 ignition[802]: Ignition 2.18.0
Oct 8 20:12:09.724261 ignition[802]: Stage: disks
Oct 8 20:12:09.724435 ignition[802]: no configs at "/usr/lib/ignition/base.d"
Oct 8 20:12:09.726372 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 20:12:09.724444 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:12:09.727437 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 20:12:09.725323 ignition[802]: disks: disks passed
Oct 8 20:12:09.728583 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 20:12:09.725371 ignition[802]: Ignition finished successfully
Oct 8 20:12:09.729560 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:12:09.730092 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:12:09.730666 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:12:09.738528 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 20:12:09.757151 systemd-fsck[811]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Oct 8 20:12:09.761860 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 20:12:09.768644 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 20:12:09.823353 kernel: EXT4-fs (sda9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none.
Oct 8 20:12:09.823523 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 20:12:09.825129 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:12:09.831536 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:12:09.834425 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 20:12:09.840519 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Oct 8 20:12:09.845386 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (819)
Oct 8 20:12:09.846243 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 20:12:09.851249 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 20:12:09.851286 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 20:12:09.851436 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:12:09.846288 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:12:09.852046 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 20:12:09.858816 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 20:12:09.864206 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 20:12:09.864230 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:12:09.871927 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:12:09.921674 initrd-setup-root[846]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 20:12:09.927150 initrd-setup-root[853]: cut: /sysroot/etc/group: No such file or directory
Oct 8 20:12:09.928993 coreos-metadata[821]: Oct 08 20:12:09.928 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Oct 8 20:12:09.931196 coreos-metadata[821]: Oct 08 20:12:09.931 INFO Fetch successful
Oct 8 20:12:09.931196 coreos-metadata[821]: Oct 08 20:12:09.931 INFO wrote hostname ci-3975-2-2-1-c965454201 to /sysroot/etc/hostname
Oct 8 20:12:09.934435 initrd-setup-root[860]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 20:12:09.933749 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 20:12:09.939593 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 20:12:10.044731 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 20:12:10.049441 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 20:12:10.052647 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 20:12:10.064366 kernel: BTRFS info (device sda6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 20:12:10.088809 ignition[936]: INFO : Ignition 2.18.0
Oct 8 20:12:10.088809 ignition[936]: INFO : Stage: mount
Oct 8 20:12:10.089804 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:12:10.089804 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:12:10.091594 ignition[936]: INFO : mount: mount passed
Oct 8 20:12:10.093446 ignition[936]: INFO : Ignition finished successfully
Oct 8 20:12:10.093418 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 20:12:10.095976 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 20:12:10.101521 systemd[1]: Starting ignition-files.service - Ignition (files)...
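The coreos-metadata entries above fetch the hostname from the metadata service and write it to `/sysroot/etc/hostname`. A minimal sketch of that final write step, with the sysroot passed in so it can target any prefix; the helper name is illustrative and not part of the agent:

```python
from pathlib import Path

def write_hostname(sysroot, hostname):
    """Persist a fetched hostname the way the log describes:
    'wrote hostname ... to /sysroot/etc/hostname'.

    Normalizes trailing whitespace and ensures exactly one newline,
    which is the conventional format for /etc/hostname.
    """
    path = Path(sysroot) / "etc" / "hostname"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(hostname.strip() + "\n")
    return path
```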
Oct 8 20:12:10.212271 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 20:12:10.228693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 20:12:10.240862 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (948)
Oct 8 20:12:10.240936 kernel: BTRFS info (device sda6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 20:12:10.240954 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 20:12:10.241417 kernel: BTRFS info (device sda6): using free space tree
Oct 8 20:12:10.245340 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 8 20:12:10.245406 kernel: BTRFS info (device sda6): auto enabling async discard
Oct 8 20:12:10.247843 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 20:12:10.269182 ignition[965]: INFO : Ignition 2.18.0
Oct 8 20:12:10.269182 ignition[965]: INFO : Stage: files
Oct 8 20:12:10.270294 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:12:10.270294 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:12:10.270294 ignition[965]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 20:12:10.273253 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 20:12:10.273253 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 20:12:10.275270 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 20:12:10.276269 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 20:12:10.277357 unknown[965]: wrote ssh authorized keys file for user: core
Oct 8 20:12:10.278180 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 20:12:10.280101 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 20:12:10.280101 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 20:12:10.856693 systemd-networkd[782]: eth0: Gained IPv6LL
Oct 8 20:12:11.368658 systemd-networkd[782]: eth1: Gained IPv6LL
Oct 8 20:12:15.334043 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 20:12:15.525372 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:12:15.527210 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Oct 8 20:12:16.130379 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 20:12:16.410930 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Oct 8 20:12:16.410930 ignition[965]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 20:12:16.413527 ignition[965]: INFO : files: files passed
Oct 8 20:12:16.413527 ignition[965]: INFO : Ignition finished successfully
Oct 8 20:12:16.414676 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 20:12:16.424518 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 20:12:16.427143 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 20:12:16.432538 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 20:12:16.432669 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
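The files stage above is a sequence of two op kinds: regular file writes (ops 3–8, a, 10) and symlink creation (op 9, linking `/etc/extensions/kubernetes.raw` to the downloaded sysext image). A minimal sketch of replaying that kind of plan against a sysroot prefix; the function and data shapes are illustrative, not Ignition's internal representation:

```python
import os
from pathlib import Path

def apply_files_stage(sysroot, files, links):
    """Write regular files, then symlinks, under a sysroot prefix.

    `files` maps absolute paths to bytes; `links` maps absolute link
    paths to their (absolute) targets, mirroring entries like
    /etc/extensions/kubernetes.raw -> /opt/extensions/.../kubernetes-v1.29.2-arm64.raw.
    """
    root = Path(sysroot)
    for rel, data in files.items():
        dest = root / rel.lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_bytes(data)
    for rel, target in links.items():
        dest = root / rel.lstrip("/")
        dest.parent.mkdir(parents=True, exist_ok=True)
        if dest.is_symlink() or dest.exists():
            dest.unlink()  # make the operation idempotent across reruns
        dest.symlink_to(target)
```

Note the link target is stored as given (an absolute path inside the future root), so it only resolves correctly after switch-root, exactly as in the logged op(9).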
Oct 8 20:12:16.443385 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:12:16.443385 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:12:16.445838 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 20:12:16.445920 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:12:16.448361 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 20:12:16.459528 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 20:12:16.494158 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 20:12:16.495451 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 20:12:16.497259 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 20:12:16.498291 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 20:12:16.499698 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 20:12:16.506520 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 20:12:16.521020 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:12:16.527554 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 20:12:16.541711 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:12:16.542834 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:12:16.544506 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 20:12:16.546139 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 20:12:16.546262 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 20:12:16.547964 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 20:12:16.548637 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 20:12:16.549609 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 20:12:16.550551 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 20:12:16.551620 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 20:12:16.552779 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 20:12:16.553741 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 20:12:16.555396 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 20:12:16.555987 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 20:12:16.557008 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 20:12:16.557853 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 20:12:16.557965 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 20:12:16.559183 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:12:16.559809 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:12:16.560885 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 20:12:16.561332 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:12:16.561958 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 20:12:16.562061 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 20:12:16.563506 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 20:12:16.563608 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 20:12:16.565000 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 20:12:16.565086 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 20:12:16.565889 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Oct 8 20:12:16.565981 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Oct 8 20:12:16.576545 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 20:12:16.577032 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 20:12:16.577151 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:12:16.580890 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 20:12:16.582110 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 20:12:16.582662 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:12:16.587526 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 20:12:16.587654 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 20:12:16.597797 ignition[1018]: INFO : Ignition 2.18.0
Oct 8 20:12:16.600372 ignition[1018]: INFO : Stage: umount
Oct 8 20:12:16.600372 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 20:12:16.600372 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Oct 8 20:12:16.598131 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 20:12:16.602507 ignition[1018]: INFO : umount: umount passed
Oct 8 20:12:16.602507 ignition[1018]: INFO : Ignition finished successfully
Oct 8 20:12:16.598222 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 20:12:16.604285 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 20:12:16.604403 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 20:12:16.605743 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 20:12:16.605835 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 20:12:16.606952 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 20:12:16.606996 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 20:12:16.608266 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 20:12:16.608307 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 20:12:16.610843 systemd[1]: Stopped target network.target - Network.
Oct 8 20:12:16.613026 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 20:12:16.613082 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 20:12:16.615171 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 20:12:16.616964 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 20:12:16.621428 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:12:16.622377 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 20:12:16.622928 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 20:12:16.624549 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 20:12:16.624599 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 20:12:16.625589 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 20:12:16.625627 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 20:12:16.627615 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 20:12:16.627668 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 20:12:16.629195 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 20:12:16.629235 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 20:12:16.630596 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 20:12:16.632947 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 20:12:16.637470 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 20:12:16.638362 systemd-networkd[782]: eth0: DHCPv6 lease lost
Oct 8 20:12:16.641777 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 20:12:16.641908 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 20:12:16.643392 systemd-networkd[782]: eth1: DHCPv6 lease lost
Oct 8 20:12:16.645658 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 20:12:16.645780 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 20:12:16.648807 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 20:12:16.649032 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:12:16.656833 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 20:12:16.657438 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 20:12:16.657497 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 20:12:16.658682 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 20:12:16.658722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:12:16.659287 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 20:12:16.659337 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:12:16.661368 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 20:12:16.661417 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 20:12:16.663921 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:12:16.665267 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 20:12:16.665463 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 20:12:16.671190 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 20:12:16.671268 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 20:12:16.674484 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 20:12:16.674624 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:12:16.675555 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 20:12:16.675590 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:12:16.676241 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 20:12:16.676269 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:12:16.679012 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 20:12:16.679058 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 20:12:16.680559 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 20:12:16.680623 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 20:12:16.681958 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 20:12:16.682000 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 20:12:16.688484 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 20:12:16.689064 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 20:12:16.689119 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:12:16.690191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:12:16.690230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:12:16.692530 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 20:12:16.692627 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 20:12:16.700149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 20:12:16.700276 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 20:12:16.701785 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 20:12:16.705527 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 20:12:16.723600 systemd[1]: Switching root.
Oct 8 20:12:16.749816 systemd-journald[236]: Journal stopped
Oct 8 20:12:17.620916 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Oct 8 20:12:17.620998 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 20:12:17.621012 kernel: SELinux: policy capability open_perms=1
Oct 8 20:12:17.621021 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 20:12:17.621035 kernel: SELinux: policy capability always_check_network=0
Oct 8 20:12:17.621044 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 20:12:17.621054 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 20:12:17.621070 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 20:12:17.621080 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 20:12:17.621089 kernel: audit: type=1403 audit(1728418336.882:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 20:12:17.621099 systemd[1]: Successfully loaded SELinux policy in 38.014ms.
Oct 8 20:12:17.621116 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.158ms.
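The journal stops at 20:12:16.749816 and the new PID 1 sends SIGTERM at 20:12:17.620916, so the switch-root gap can be read straight off the timestamps. A small sketch for measuring such intervals from lines in this log's `Oct 8 HH:MM:SS.ffffff` prefix format; note the prefix carries no year, so one must be supplied (2024 here, taken from the build banner at the top of the log):

```python
from datetime import datetime

def parse_ts(line, year=2024):
    """Parse the 'Oct 8 20:12:16.749816' prefix used throughout this log.

    The journald short-month prefix omits the year, so a reference year
    is prepended before parsing (an assumption; adjust for other logs).
    """
    stamp = " ".join(line.split()[:3])
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

def elapsed(line_a, line_b):
    """Seconds between two log lines, from their timestamp prefixes."""
    return (parse_ts(line_b) - parse_ts(line_a)).total_seconds()
```

Applied to the two journald lines above, this gives roughly 0.87 s between journal stop and the SIGTERM from the new PID 1.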
Oct 8 20:12:17.621127 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 20:12:17.621139 systemd[1]: Detected virtualization kvm.
Oct 8 20:12:17.621150 systemd[1]: Detected architecture arm64.
Oct 8 20:12:17.621160 systemd[1]: Detected first boot.
Oct 8 20:12:17.621170 systemd[1]: Hostname set to .
Oct 8 20:12:17.621184 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 20:12:17.621194 zram_generator::config[1061]: No configuration found.
Oct 8 20:12:17.621209 systemd[1]: Populated /etc with preset unit settings.
Oct 8 20:12:17.621219 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 20:12:17.621231 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 20:12:17.621241 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 20:12:17.621252 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 20:12:17.621263 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 20:12:17.621272 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 20:12:17.621283 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 20:12:17.621293 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 20:12:17.621306 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 20:12:17.621335 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 20:12:17.621346 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 20:12:17.621356 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 20:12:17.621367 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 20:12:17.621377 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 20:12:17.621388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 20:12:17.621398 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 20:12:17.621409 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 20:12:17.621419 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Oct 8 20:12:17.621430 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 20:12:17.621440 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 20:12:17.621451 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 20:12:17.621462 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 20:12:17.621472 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 20:12:17.621483 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 20:12:17.621494 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 20:12:17.621505 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 20:12:17.621515 systemd[1]: Reached target swap.target - Swaps.
Oct 8 20:12:17.621526 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 20:12:17.621536 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 20:12:17.621548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 20:12:17.621559 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 20:12:17.621569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 20:12:17.621579 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 20:12:17.621590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 20:12:17.621602 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 20:12:17.621615 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 20:12:17.621627 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 20:12:17.621637 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 20:12:17.621648 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 20:12:17.621658 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 20:12:17.621668 systemd[1]: Reached target machines.target - Containers.
Oct 8 20:12:17.621678 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 20:12:17.621691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:12:17.621701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 20:12:17.621711 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 20:12:17.621722 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:12:17.621732 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:12:17.621746 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:12:17.621758 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 20:12:17.621769 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:12:17.621781 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 20:12:17.621795 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 20:12:17.621809 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 20:12:17.621820 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 20:12:17.621830 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 20:12:17.621840 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 20:12:17.621852 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 20:12:17.621862 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 20:12:17.621873 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 20:12:17.621883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 20:12:17.621894 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 20:12:17.621905 systemd[1]: Stopped verity-setup.service.
Oct 8 20:12:17.621915 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 20:12:17.621925 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 20:12:17.621935 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 20:12:17.621947 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 20:12:17.621958 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 20:12:17.621968 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 20:12:17.621982 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 20:12:17.621995 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 20:12:17.622009 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 20:12:17.622022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:12:17.622032 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:12:17.622043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:12:17.622053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:12:17.622063 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 20:12:17.622074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 20:12:17.622086 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 20:12:17.622097 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 20:12:17.622107 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 20:12:17.622118 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 20:12:17.622128 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 20:12:17.622140 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 20:12:17.622150 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 20:12:17.622191 systemd-journald[1127]: Collecting audit messages is disabled.
Oct 8 20:12:17.622214 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 20:12:17.622224 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:12:17.622234 kernel: loop: module loaded
Oct 8 20:12:17.622245 systemd-journald[1127]: Journal started
Oct 8 20:12:17.622268 systemd-journald[1127]: Runtime Journal (/run/log/journal/29ad8f7067db4a72bfe72a98776a8a4b) is 8.0M, max 76.5M, 68.5M free.
Oct 8 20:12:17.352930 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 20:12:17.374945 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Oct 8 20:12:17.375503 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 20:12:17.626450 kernel: fuse: init (API version 7.39)
Oct 8 20:12:17.628486 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 20:12:17.628534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:12:17.640028 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 20:12:17.645417 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 20:12:17.650384 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 20:12:17.656330 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 20:12:17.656265 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 20:12:17.656454 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 20:12:17.657223 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:12:17.657353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:12:17.658039 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 20:12:17.659121 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 20:12:17.667211 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 20:12:17.677384 kernel: loop0: detected capacity change from 0 to 194512
Oct 8 20:12:17.678359 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 20:12:17.684378 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 20:12:17.700336 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 20:12:17.694446 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 20:12:17.712529 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 20:12:17.720727 kernel: ACPI: bus type drm_connector registered
Oct 8 20:12:17.720139 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 20:12:17.721149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:12:17.723691 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 20:12:17.727933 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 20:12:17.739585 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 20:12:17.740393 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:12:17.740528 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:12:17.745276 systemd-journald[1127]: Time spent on flushing to /var/log/journal/29ad8f7067db4a72bfe72a98776a8a4b is 82.302ms for 1128 entries.
Oct 8 20:12:17.745276 systemd-journald[1127]: System Journal (/var/log/journal/29ad8f7067db4a72bfe72a98776a8a4b) is 8.0M, max 584.8M, 576.8M free.
Oct 8 20:12:17.841466 systemd-journald[1127]: Received client request to flush runtime journal.
Oct 8 20:12:17.841518 kernel: loop1: detected capacity change from 0 to 59688
Oct 8 20:12:17.841532 kernel: loop2: detected capacity change from 0 to 113672
Oct 8 20:12:17.841544 kernel: loop3: detected capacity change from 0 to 8
Oct 8 20:12:17.778802 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 20:12:17.785386 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 20:12:17.794525 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 20:12:17.798568 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 20:12:17.803428 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 20:12:17.805689 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 20:12:17.813521 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 20:12:17.827821 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 20:12:17.844852 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 20:12:17.859043 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Oct 8 20:12:17.859059 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Oct 8 20:12:17.870485 kernel: loop4: detected capacity change from 0 to 194512
Oct 8 20:12:17.876508 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 20:12:17.891394 kernel: loop5: detected capacity change from 0 to 59688
Oct 8 20:12:17.900611 kernel: loop6: detected capacity change from 0 to 113672
Oct 8 20:12:17.920368 kernel: loop7: detected capacity change from 0 to 8
Oct 8 20:12:17.919277 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Oct 8 20:12:17.921347 (sd-merge)[1199]: Merged extensions into '/usr'.
Oct 8 20:12:17.936169 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 20:12:17.936384 systemd[1]: Reloading...
Oct 8 20:12:18.075010 zram_generator::config[1224]: No configuration found.
Oct 8 20:12:18.202304 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 20:12:18.209990 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:12:18.256329 systemd[1]: Reloading finished in 319 ms.
Oct 8 20:12:18.307374 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 20:12:18.308769 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 20:12:18.319910 systemd[1]: Starting ensure-sysext.service...
Oct 8 20:12:18.329566 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 20:12:18.341580 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Oct 8 20:12:18.341596 systemd[1]: Reloading...
Oct 8 20:12:18.369395 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 20:12:18.369912 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 20:12:18.372586 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 20:12:18.372842 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Oct 8 20:12:18.372898 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Oct 8 20:12:18.378828 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:12:18.378840 systemd-tmpfiles[1262]: Skipping /boot
Oct 8 20:12:18.389678 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 20:12:18.389692 systemd-tmpfiles[1262]: Skipping /boot
Oct 8 20:12:18.421348 zram_generator::config[1287]: No configuration found.
Oct 8 20:12:18.522028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:12:18.568029 systemd[1]: Reloading finished in 226 ms.
Oct 8 20:12:18.584273 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 20:12:18.591215 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 20:12:18.618509 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:12:18.623591 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 20:12:18.635496 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 20:12:18.640491 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 20:12:18.657163 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 20:12:18.662521 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 20:12:18.668917 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:12:18.678115 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:12:18.682418 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:12:18.686253 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:12:18.686898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:12:18.699614 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 20:12:18.701130 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 20:12:18.702279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:12:18.704369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:12:18.706477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:12:18.706601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:12:18.713510 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:12:18.713896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:12:18.721360 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Oct 8 20:12:18.726412 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:12:18.726606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:12:18.733382 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 20:12:18.741687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 20:12:18.759261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 20:12:18.767608 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 20:12:18.770360 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 20:12:18.773355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 20:12:18.780971 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 20:12:18.781645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 20:12:18.782473 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 20:12:18.784216 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 20:12:18.788412 systemd[1]: Finished ensure-sysext.service.
Oct 8 20:12:18.789590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 20:12:18.789757 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 20:12:18.798101 augenrules[1362]: No rules
Oct 8 20:12:18.800533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 20:12:18.801698 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 20:12:18.803696 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:12:18.804559 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 20:12:18.809684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 20:12:18.809862 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 20:12:18.810775 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 20:12:18.811393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 20:12:18.818675 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 20:12:18.818819 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 20:12:18.824517 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 20:12:18.825629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 20:12:18.825682 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 20:12:18.833519 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 8 20:12:18.930465 systemd-resolved[1334]: Positive Trust Anchors:
Oct 8 20:12:18.936359 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 20:12:18.936403 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 20:12:18.947237 systemd-resolved[1334]: Using system hostname 'ci-3975-2-2-1-c965454201'.
Oct 8 20:12:18.951590 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 20:12:18.952284 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 20:12:18.953628 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 8 20:12:18.954289 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 20:12:18.955096 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Oct 8 20:12:18.987146 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1398)
Oct 8 20:12:18.989545 systemd-networkd[1384]: lo: Link UP
Oct 8 20:12:18.989553 systemd-networkd[1384]: lo: Gained carrier
Oct 8 20:12:18.991082 systemd-networkd[1384]: Enumeration completed
Oct 8 20:12:18.991187 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 20:12:18.991901 systemd[1]: Reached target network.target - Network.
Oct 8 20:12:18.994270 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:12:18.994395 systemd-networkd[1384]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:12:18.995178 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:12:18.995276 systemd-networkd[1384]: eth1: Link UP
Oct 8 20:12:18.995389 systemd-networkd[1384]: eth1: Gained carrier
Oct 8 20:12:18.995442 systemd-networkd[1384]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:12:18.999109 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 20:12:19.022518 systemd-networkd[1384]: eth1: DHCPv4 address 10.0.0.3/32, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 8 20:12:19.023278 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Oct 8 20:12:19.059821 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:12:19.060262 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 20:12:19.061419 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Oct 8 20:12:19.061593 systemd-networkd[1384]: eth0: Link UP
Oct 8 20:12:19.061675 systemd-networkd[1384]: eth0: Gained carrier
Oct 8 20:12:19.061796 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 20:12:19.066125 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Oct 8 20:12:19.068369 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1389)
Oct 8 20:12:19.086361 kernel: mousedev: PS/2 mouse device common for all mice
Oct 8 20:12:19.144875 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Oct 8 20:12:19.152139 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 20:12:19.174873 systemd-networkd[1384]: eth0: DHCPv4 address 49.13.72.235/32, gateway 172.31.1.1 acquired from 172.31.1.1
Oct 8 20:12:19.175602 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Oct 8 20:12:19.180044 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0
Oct 8 20:12:19.180137 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 8 20:12:19.180155 kernel: [drm] features: -context_init
Oct 8 20:12:19.184655 kernel: [drm] number of scanouts: 1
Oct 8 20:12:19.184713 kernel: [drm] number of cap sets: 0
Oct 8 20:12:19.186780 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:12:19.190847 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0
Oct 8 20:12:19.189369 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 20:12:19.196343 kernel: Console: switching to colour frame buffer device 160x50
Oct 8 20:12:19.203242 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 20:12:19.203538 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 8 20:12:19.203758 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:12:19.214557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 20:12:19.277184 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 20:12:19.339203 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 20:12:19.343695 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 20:12:19.370496 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:12:19.399808 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 20:12:19.403006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 20:12:19.404481 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 20:12:19.405710 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 20:12:19.407018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 20:12:19.408761 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 20:12:19.410114 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 20:12:19.411587 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 20:12:19.412572 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 20:12:19.412611 systemd[1]: Reached target paths.target - Path Units.
Oct 8 20:12:19.413102 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 20:12:19.415603 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 20:12:19.419777 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 20:12:19.426369 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 20:12:19.428424 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 20:12:19.429590 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 20:12:19.430262 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 20:12:19.430835 systemd[1]: Reached target basic.target - Basic System.
Oct 8 20:12:19.431478 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:12:19.431513 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 20:12:19.441565 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 20:12:19.446218 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 20:12:19.447621 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 20:12:19.450611 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 20:12:19.454678 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 20:12:19.458589 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 20:12:19.459108 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 20:12:19.466520 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 20:12:19.474377 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 20:12:19.483795 jq[1447]: false
Oct 8 20:12:19.487760 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 20:12:19.491533 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 20:12:19.497514 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 20:12:19.498915 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 20:12:19.499862 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 20:12:19.502487 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 20:12:19.506565 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 20:12:19.507822 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 20:12:19.513803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 20:12:19.513957 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 20:12:19.517780 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 20:12:19.517944 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 20:12:19.523985 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 20:12:19.525026 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 20:12:19.535273 coreos-metadata[1445]: Oct 08 20:12:19.534 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1
Oct 8 20:12:19.538407 coreos-metadata[1445]: Oct 08 20:12:19.538 INFO Fetch successful
Oct 8 20:12:19.539004 extend-filesystems[1448]: Found loop4
Oct 8 20:12:19.539004 extend-filesystems[1448]: Found loop5
Oct 8 20:12:19.539004 extend-filesystems[1448]: Found loop6
Oct 8 20:12:19.539004 extend-filesystems[1448]: Found loop7
Oct 8 20:12:19.539004 extend-filesystems[1448]: Found sda
Oct 8 20:12:19.539004 extend-filesystems[1448]: Found sda1
Oct 8 20:12:19.539004 extend-filesystems[1448]: Found sda2
Oct 8 20:12:19.555387 extend-filesystems[1448]: Found sda3
Oct 8 20:12:19.555387 extend-filesystems[1448]: Found usr
Oct 8 20:12:19.555387 extend-filesystems[1448]: Found sda4
Oct 8 20:12:19.555387 extend-filesystems[1448]: Found sda6
Oct 8 20:12:19.555387 extend-filesystems[1448]: Found sda7
Oct 8 20:12:19.555387 extend-filesystems[1448]: Found sda9
Oct 8 20:12:19.555387 extend-filesystems[1448]: Checking size of /dev/sda9
Oct 8 20:12:19.582490 coreos-metadata[1445]: Oct 08 20:12:19.539 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1
Oct 8 20:12:19.582490 coreos-metadata[1445]: Oct 08 20:12:19.539 INFO Fetch successful
Oct 8 20:12:19.571256 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 20:12:19.571048 dbus-daemon[1446]: [system] SELinux support is enabled
Oct 8 20:12:19.580048 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 20:12:19.580123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 20:12:19.580835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 20:12:19.592506 jq[1464]: true
Oct 8 20:12:19.580852 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 20:12:19.596927 tar[1466]: linux-arm64/helm
Oct 8 20:12:19.611516 jq[1479]: true
Oct 8 20:12:19.611648 extend-filesystems[1448]: Resized partition /dev/sda9
Oct 8 20:12:19.626128 update_engine[1460]: I1008 20:12:19.623873 1460 main.cc:92] Flatcar Update Engine starting
Oct 8 20:12:19.626681 (ntainerd)[1477]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 20:12:19.632808 extend-filesystems[1493]: resize2fs 1.47.0 (5-Feb-2023)
Oct 8 20:12:19.636994 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 20:12:19.643433 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks
Oct 8 20:12:19.642762 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 20:12:19.643526 update_engine[1460]: I1008 20:12:19.640097 1460 update_check_scheduler.cc:74] Next update check in 6m18s
Oct 8 20:12:19.696524 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1394)
Oct 8 20:12:19.707658 systemd-logind[1457]: New seat seat0.
Oct 8 20:12:19.713472 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 20:12:19.715095 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 20:12:19.717234 systemd-logind[1457]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 8 20:12:19.717488 systemd-logind[1457]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Oct 8 20:12:19.718158 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 20:12:19.732031 bash[1510]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 20:12:19.732975 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 20:12:19.754727 systemd[1]: Starting sshkeys.service...
Oct 8 20:12:19.793444 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 20:12:19.803574 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 8 20:12:19.826894 coreos-metadata[1517]: Oct 08 20:12:19.826 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Oct 8 20:12:19.830237 coreos-metadata[1517]: Oct 08 20:12:19.830 INFO Fetch successful
Oct 8 20:12:19.831366 kernel: EXT4-fs (sda9): resized filesystem to 9393147
Oct 8 20:12:19.849992 unknown[1517]: wrote ssh authorized keys file for user: core
Oct 8 20:12:19.853597 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Oct 8 20:12:19.853597 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5
Oct 8 20:12:19.853597 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long.
Oct 8 20:12:19.866296 extend-filesystems[1448]: Resized filesystem in /dev/sda9
Oct 8 20:12:19.866296 extend-filesystems[1448]: Found sr0
Oct 8 20:12:19.864977 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 20:12:19.865178 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 20:12:19.881119 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Oct 8 20:12:19.881675 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 8 20:12:19.887805 systemd[1]: Finished sshkeys.service. Oct 8 20:12:19.908625 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 8 20:12:20.072546 systemd-networkd[1384]: eth1: Gained IPv6LL Oct 8 20:12:20.077469 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. Oct 8 20:12:20.083009 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 8 20:12:20.085886 systemd[1]: Reached target network-online.target - Network is Online. Oct 8 20:12:20.095659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:20.101636 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 8 20:12:20.174613 containerd[1477]: time="2024-10-08T20:12:20.174526360Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Oct 8 20:12:20.179263 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 8 20:12:20.268390 containerd[1477]: time="2024-10-08T20:12:20.268337520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 8 20:12:20.268390 containerd[1477]: time="2024-10-08T20:12:20.268389480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:20.273000 containerd[1477]: time="2024-10-08T20:12:20.272946520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:20.273000 containerd[1477]: time="2024-10-08T20:12:20.272991560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:20.273244 containerd[1477]: time="2024-10-08T20:12:20.273216600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:20.273244 containerd[1477]: time="2024-10-08T20:12:20.273240720Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 8 20:12:20.275711 containerd[1477]: time="2024-10-08T20:12:20.275673840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:20.275821 containerd[1477]: time="2024-10-08T20:12:20.275799200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:20.275851 containerd[1477]: time="2024-10-08T20:12:20.275820320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:20.275906 containerd[1477]: time="2024-10-08T20:12:20.275889680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:20.278318 containerd[1477]: time="2024-10-08T20:12:20.276142640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 8 20:12:20.278318 containerd[1477]: time="2024-10-08T20:12:20.276168640Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 8 20:12:20.278318 containerd[1477]: time="2024-10-08T20:12:20.276181920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 8 20:12:20.278318 containerd[1477]: time="2024-10-08T20:12:20.276302360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 8 20:12:20.278318 containerd[1477]: time="2024-10-08T20:12:20.276334400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 8 20:12:20.278318 containerd[1477]: time="2024-10-08T20:12:20.276391040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 8 20:12:20.278318 containerd[1477]: time="2024-10-08T20:12:20.276406280Z" level=info msg="metadata content store policy set" policy=shared Oct 8 20:12:20.285124 containerd[1477]: time="2024-10-08T20:12:20.285079880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 8 20:12:20.285207 containerd[1477]: time="2024-10-08T20:12:20.285132400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 8 20:12:20.285207 containerd[1477]: time="2024-10-08T20:12:20.285148520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 8 20:12:20.285207 containerd[1477]: time="2024-10-08T20:12:20.285180320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Oct 8 20:12:20.285207 containerd[1477]: time="2024-10-08T20:12:20.285194800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 8 20:12:20.285207 containerd[1477]: time="2024-10-08T20:12:20.285205960Z" level=info msg="NRI interface is disabled by configuration." Oct 8 20:12:20.285307 containerd[1477]: time="2024-10-08T20:12:20.285218880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 8 20:12:20.285403 containerd[1477]: time="2024-10-08T20:12:20.285383560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 8 20:12:20.285430 containerd[1477]: time="2024-10-08T20:12:20.285407760Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 8 20:12:20.285430 containerd[1477]: time="2024-10-08T20:12:20.285422160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 8 20:12:20.285473 containerd[1477]: time="2024-10-08T20:12:20.285436280Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 8 20:12:20.285473 containerd[1477]: time="2024-10-08T20:12:20.285450240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 8 20:12:20.285473 containerd[1477]: time="2024-10-08T20:12:20.285467320Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 8 20:12:20.285522 containerd[1477]: time="2024-10-08T20:12:20.285480160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 8 20:12:20.285522 containerd[1477]: time="2024-10-08T20:12:20.285492600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Oct 8 20:12:20.285522 containerd[1477]: time="2024-10-08T20:12:20.285507560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 8 20:12:20.285522 containerd[1477]: time="2024-10-08T20:12:20.285520640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 8 20:12:20.285587 containerd[1477]: time="2024-10-08T20:12:20.285533560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 8 20:12:20.285587 containerd[1477]: time="2024-10-08T20:12:20.285545360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 8 20:12:20.285659 containerd[1477]: time="2024-10-08T20:12:20.285639720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 8 20:12:20.285902 containerd[1477]: time="2024-10-08T20:12:20.285884800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 8 20:12:20.285975 containerd[1477]: time="2024-10-08T20:12:20.285915280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.285975 containerd[1477]: time="2024-10-08T20:12:20.285931160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 8 20:12:20.285975 containerd[1477]: time="2024-10-08T20:12:20.285952880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 8 20:12:20.286089 containerd[1477]: time="2024-10-08T20:12:20.286075680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Oct 8 20:12:20.286115 containerd[1477]: time="2024-10-08T20:12:20.286093200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.286115 containerd[1477]: time="2024-10-08T20:12:20.286106200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.286164 containerd[1477]: time="2024-10-08T20:12:20.286117520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.286164 containerd[1477]: time="2024-10-08T20:12:20.286130320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.286164 containerd[1477]: time="2024-10-08T20:12:20.286142840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.286164 containerd[1477]: time="2024-10-08T20:12:20.286154160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.286228 containerd[1477]: time="2024-10-08T20:12:20.286165640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.286228 containerd[1477]: time="2024-10-08T20:12:20.286179040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 8 20:12:20.289254 containerd[1477]: time="2024-10-08T20:12:20.288269240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.289298 containerd[1477]: time="2024-10-08T20:12:20.289264480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.289298 containerd[1477]: time="2024-10-08T20:12:20.289282760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Oct 8 20:12:20.289356 containerd[1477]: time="2024-10-08T20:12:20.289340160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.289380 containerd[1477]: time="2024-10-08T20:12:20.289362320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.289399 containerd[1477]: time="2024-10-08T20:12:20.289381200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.290130 containerd[1477]: time="2024-10-08T20:12:20.290107320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.290157 containerd[1477]: time="2024-10-08T20:12:20.290132720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 8 20:12:20.290567 containerd[1477]: time="2024-10-08T20:12:20.290505560Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] 
NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 8 20:12:20.290673 containerd[1477]: time="2024-10-08T20:12:20.290574080Z" level=info msg="Connect containerd service" Oct 8 20:12:20.292122 containerd[1477]: time="2024-10-08T20:12:20.290608800Z" level=info msg="using legacy CRI server" Oct 8 20:12:20.292164 containerd[1477]: time="2024-10-08T20:12:20.292122800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 8 20:12:20.292809 containerd[1477]: time="2024-10-08T20:12:20.292305920Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 20:12:20.293995 containerd[1477]: time="2024-10-08T20:12:20.293827480Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:12:20.294038 containerd[1477]: time="2024-10-08T20:12:20.294011920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 20:12:20.295340 containerd[1477]: time="2024-10-08T20:12:20.294031080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 20:12:20.295374 containerd[1477]: time="2024-10-08T20:12:20.295336760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 20:12:20.295374 containerd[1477]: time="2024-10-08T20:12:20.295357200Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.294223360Z" level=info msg="Start subscribing containerd event" Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.295520960Z" level=info msg="Start recovering state" Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.295590880Z" level=info msg="Start event monitor" Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.295603160Z" level=info msg="Start snapshots syncer" Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.295612440Z" level=info msg="Start cni network conf syncer for default" Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.295619800Z" level=info msg="Start streaming server" Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.295956720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 20:12:20.298659 containerd[1477]: time="2024-10-08T20:12:20.295997440Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 20:12:20.296181 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 20:12:20.298954 containerd[1477]: time="2024-10-08T20:12:20.298933440Z" level=info msg="containerd successfully booted in 0.129225s" Oct 8 20:12:20.393682 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 20:12:20.415089 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 20:12:20.422622 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 20:12:20.431643 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 20:12:20.431817 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 20:12:20.442148 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 20:12:20.455268 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 8 20:12:20.463864 tar[1466]: linux-arm64/LICENSE Oct 8 20:12:20.465002 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 20:12:20.465898 tar[1466]: linux-arm64/README.md Oct 8 20:12:20.469672 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 8 20:12:20.470495 systemd[1]: Reached target getty.target - Login Prompts. Oct 8 20:12:20.481374 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 20:12:20.712492 systemd-networkd[1384]: eth0: Gained IPv6LL Oct 8 20:12:20.713151 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection. Oct 8 20:12:20.755183 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 8 20:12:20.761778 systemd[1]: Started sshd@0-49.13.72.235:22-121.142.87.218:43210.service - OpenSSH per-connection server daemon (121.142.87.218:43210). Oct 8 20:12:20.882900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:12:20.884992 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 20:12:20.891494 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:20.891701 systemd[1]: Startup finished in 783ms (kernel) + 10.163s (initrd) + 4.046s (userspace) = 14.994s. Oct 8 20:12:21.507234 kubelet[1579]: E1008 20:12:21.507143 1579 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:21.509477 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:21.509711 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:12:22.302696 sshd[1572]: Invalid user Daniel from 121.142.87.218 port 43210 Oct 8 20:12:22.593481 sshd[1572]: Received disconnect from 121.142.87.218 port 43210:11: Bye Bye [preauth] Oct 8 20:12:22.593481 sshd[1572]: Disconnected from invalid user Daniel 121.142.87.218 port 43210 [preauth] Oct 8 20:12:22.595981 systemd[1]: sshd@0-49.13.72.235:22-121.142.87.218:43210.service: Deactivated successfully. Oct 8 20:12:31.760144 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 20:12:31.767670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:31.865814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:12:31.870804 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:31.924936 kubelet[1601]: E1008 20:12:31.924873 1601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:31.928081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:31.928228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:12:42.130052 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 20:12:42.136648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:42.251577 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:12:42.251753 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:42.302232 kubelet[1617]: E1008 20:12:42.302129 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:42.304697 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:42.304828 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:12:50.996836 systemd-timesyncd[1387]: Contacted time server 78.47.118.0:123 (2.flatcar.pool.ntp.org). Oct 8 20:12:50.996951 systemd-timesyncd[1387]: Initial clock synchronization to Tue 2024-10-08 20:12:51.203141 UTC. Oct 8 20:12:52.381830 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 8 20:12:52.388589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:12:52.504962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:12:52.519073 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:12:52.573885 kubelet[1633]: E1008 20:12:52.573840 1633 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:12:52.576943 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:12:52.577141 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 8 20:13:02.631140 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 8 20:13:02.644645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:02.762197 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:02.773774 (kubelet)[1650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:02.826141 kubelet[1650]: E1008 20:13:02.826067 1650 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:02.829918 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:02.830111 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:04.913414 update_engine[1460]: I1008 20:13:04.912684 1460 update_attempter.cc:509] Updating boot flags... Oct 8 20:13:04.961403 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1668) Oct 8 20:13:05.020354 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1667) Oct 8 20:13:05.062377 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1667) Oct 8 20:13:12.880135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Oct 8 20:13:12.892035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:12.992435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:13:12.997001 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:13.048620 kubelet[1688]: E1008 20:13:13.048497 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:13.053206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:13.053666 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:23.130080 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Oct 8 20:13:23.141657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:23.249119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:23.262180 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:23.319271 kubelet[1705]: E1008 20:13:23.319218 1705 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:23.322425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:23.322620 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:33.379903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Oct 8 20:13:33.385634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 20:13:33.486040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:33.502081 (kubelet)[1721]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:33.548113 kubelet[1721]: E1008 20:13:33.547996 1721 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:33.552851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:33.553005 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:43.629995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Oct 8 20:13:43.637718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:43.736482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:43.750685 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:43.810546 kubelet[1737]: E1008 20:13:43.810473 1737 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:43.815080 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:43.815264 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:53.880227 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. 
Oct 8 20:13:53.890582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:13:54.006561 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:13:54.008063 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:13:54.061222 kubelet[1753]: E1008 20:13:54.061144 1753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:13:54.065957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:13:54.066268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:13:55.930805 systemd[1]: Started sshd@1-49.13.72.235:22-61.147.204.98:34147.service - OpenSSH per-connection server daemon (61.147.204.98:34147). Oct 8 20:14:04.129801 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Oct 8 20:14:04.137618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:04.258722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 20:14:04.258961 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 20:14:04.316396 kubelet[1771]: E1008 20:14:04.316234 1771 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 20:14:04.318884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 20:14:04.319025 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 20:14:07.977195 systemd[1]: Started sshd@2-49.13.72.235:22-139.178.89.65:53792.service - OpenSSH per-connection server daemon (139.178.89.65:53792). Oct 8 20:14:08.953951 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 53792 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk Oct 8 20:14:08.956509 sshd[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:14:08.973793 systemd-logind[1457]: New session 1 of user core. Oct 8 20:14:08.975950 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 20:14:08.982677 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 20:14:08.995355 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 20:14:09.002671 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 20:14:09.006888 (systemd)[1785]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:14:09.131086 systemd[1785]: Queued start job for default target default.target. Oct 8 20:14:09.137652 systemd[1785]: Created slice app.slice - User Application Slice. Oct 8 20:14:09.137685 systemd[1785]: Reached target paths.target - Paths. 
Oct 8 20:14:09.137703 systemd[1785]: Reached target timers.target - Timers.
Oct 8 20:14:09.139381 systemd[1785]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 20:14:09.169155 systemd[1785]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 20:14:09.169429 systemd[1785]: Reached target sockets.target - Sockets.
Oct 8 20:14:09.169476 systemd[1785]: Reached target basic.target - Basic System.
Oct 8 20:14:09.169595 systemd[1785]: Reached target default.target - Main User Target.
Oct 8 20:14:09.169693 systemd[1785]: Startup finished in 155ms.
Oct 8 20:14:09.169830 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 20:14:09.176524 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 20:14:09.875796 systemd[1]: Started sshd@3-49.13.72.235:22-139.178.89.65:53796.service - OpenSSH per-connection server daemon (139.178.89.65:53796).
Oct 8 20:14:10.850608 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 53796 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:14:10.852870 sshd[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:14:10.861565 systemd-logind[1457]: New session 2 of user core.
Oct 8 20:14:10.868755 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 20:14:11.534206 sshd[1796]: pam_unix(sshd:session): session closed for user core
Oct 8 20:14:11.538199 systemd-logind[1457]: Session 2 logged out. Waiting for processes to exit.
Oct 8 20:14:11.538337 systemd[1]: sshd@3-49.13.72.235:22-139.178.89.65:53796.service: Deactivated successfully.
Oct 8 20:14:11.541413 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 20:14:11.544969 systemd-logind[1457]: Removed session 2.
Oct 8 20:14:11.704730 systemd[1]: Started sshd@4-49.13.72.235:22-139.178.89.65:53808.service - OpenSSH per-connection server daemon (139.178.89.65:53808).
Oct 8 20:14:12.684912 sshd[1803]: Accepted publickey for core from 139.178.89.65 port 53808 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:14:12.686782 sshd[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:14:12.691737 systemd-logind[1457]: New session 3 of user core.
Oct 8 20:14:12.698513 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 20:14:13.360145 sshd[1803]: pam_unix(sshd:session): session closed for user core
Oct 8 20:14:13.366525 systemd[1]: sshd@4-49.13.72.235:22-139.178.89.65:53808.service: Deactivated successfully.
Oct 8 20:14:13.370403 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 20:14:13.371496 systemd-logind[1457]: Session 3 logged out. Waiting for processes to exit.
Oct 8 20:14:13.372636 systemd-logind[1457]: Removed session 3.
Oct 8 20:14:13.544721 systemd[1]: Started sshd@5-49.13.72.235:22-139.178.89.65:53816.service - OpenSSH per-connection server daemon (139.178.89.65:53816).
Oct 8 20:14:14.366617 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Oct 8 20:14:14.371738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:14:14.493938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:14:14.498573 (kubelet)[1820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:14:14.536736 sshd[1810]: Accepted publickey for core from 139.178.89.65 port 53816 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:14:14.540257 sshd[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:14:14.547107 kubelet[1820]: E1008 20:14:14.546972 1820 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:14:14.550278 systemd-logind[1457]: New session 4 of user core.
Oct 8 20:14:14.557658 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 20:14:14.558222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:14:14.558509 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:14:15.227964 sshd[1810]: pam_unix(sshd:session): session closed for user core
Oct 8 20:14:15.233615 systemd[1]: sshd@5-49.13.72.235:22-139.178.89.65:53816.service: Deactivated successfully.
Oct 8 20:14:15.236120 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 20:14:15.237096 systemd-logind[1457]: Session 4 logged out. Waiting for processes to exit.
Oct 8 20:14:15.238198 systemd-logind[1457]: Removed session 4.
Oct 8 20:14:15.402862 systemd[1]: Started sshd@6-49.13.72.235:22-139.178.89.65:60962.service - OpenSSH per-connection server daemon (139.178.89.65:60962).
Oct 8 20:14:16.360680 sshd[1834]: Accepted publickey for core from 139.178.89.65 port 60962 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:14:16.362628 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:14:16.369630 systemd-logind[1457]: New session 5 of user core.
Oct 8 20:14:16.376532 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 20:14:16.923428 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 20:14:16.923727 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 20:14:16.938390 sudo[1837]: pam_unix(sudo:session): session closed for user root
Oct 8 20:14:17.094933 sshd[1834]: pam_unix(sshd:session): session closed for user core
Oct 8 20:14:17.101308 systemd[1]: sshd@6-49.13.72.235:22-139.178.89.65:60962.service: Deactivated successfully.
Oct 8 20:14:17.104111 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 20:14:17.107357 systemd-logind[1457]: Session 5 logged out. Waiting for processes to exit.
Oct 8 20:14:17.109392 systemd-logind[1457]: Removed session 5.
Oct 8 20:14:17.274703 systemd[1]: Started sshd@7-49.13.72.235:22-139.178.89.65:60978.service - OpenSSH per-connection server daemon (139.178.89.65:60978).
Oct 8 20:14:18.239507 sshd[1842]: Accepted publickey for core from 139.178.89.65 port 60978 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:14:18.241704 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:14:18.249840 systemd-logind[1457]: New session 6 of user core.
Oct 8 20:14:18.250671 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 20:14:18.753210 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 20:14:18.753902 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 20:14:18.758673 sudo[1846]: pam_unix(sudo:session): session closed for user root
Oct 8 20:14:18.766062 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 20:14:18.766700 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 20:14:18.776772 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 20:14:18.791586 auditctl[1849]: No rules
Oct 8 20:14:18.792288 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 20:14:18.792653 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 20:14:18.801910 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 20:14:18.831514 augenrules[1867]: No rules
Oct 8 20:14:18.832251 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 20:14:18.833586 sudo[1845]: pam_unix(sudo:session): session closed for user root
Oct 8 20:14:18.990471 sshd[1842]: pam_unix(sshd:session): session closed for user core
Oct 8 20:14:18.995419 systemd[1]: sshd@7-49.13.72.235:22-139.178.89.65:60978.service: Deactivated successfully.
Oct 8 20:14:18.997282 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 20:14:18.999807 systemd-logind[1457]: Session 6 logged out. Waiting for processes to exit.
Oct 8 20:14:19.001820 systemd-logind[1457]: Removed session 6.
Oct 8 20:14:19.159684 systemd[1]: Started sshd@8-49.13.72.235:22-139.178.89.65:60986.service - OpenSSH per-connection server daemon (139.178.89.65:60986).
Oct 8 20:14:20.135476 sshd[1875]: Accepted publickey for core from 139.178.89.65 port 60986 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:14:20.137532 sshd[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:14:20.143406 systemd-logind[1457]: New session 7 of user core.
Oct 8 20:14:20.154663 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 20:14:20.647559 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 20:14:20.647863 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 8 20:14:20.772688 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 20:14:20.772743 (dockerd)[1888]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 20:14:21.035360 dockerd[1888]: time="2024-10-08T20:14:21.035173755Z" level=info msg="Starting up"
Oct 8 20:14:21.068345 dockerd[1888]: time="2024-10-08T20:14:21.068159646Z" level=info msg="Loading containers: start."
Oct 8 20:14:21.187361 kernel: Initializing XFRM netlink socket
Oct 8 20:14:21.280120 systemd-networkd[1384]: docker0: Link UP
Oct 8 20:14:21.299451 dockerd[1888]: time="2024-10-08T20:14:21.299305648Z" level=info msg="Loading containers: done."
Oct 8 20:14:21.366856 dockerd[1888]: time="2024-10-08T20:14:21.366273538Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 20:14:21.366856 dockerd[1888]: time="2024-10-08T20:14:21.366485744Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Oct 8 20:14:21.366856 dockerd[1888]: time="2024-10-08T20:14:21.366621628Z" level=info msg="Daemon has completed initialization"
Oct 8 20:14:21.394887 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 20:14:21.395655 dockerd[1888]: time="2024-10-08T20:14:21.394714141Z" level=info msg="API listen on /run/docker.sock"
Oct 8 20:14:22.428810 containerd[1477]: time="2024-10-08T20:14:22.428764391Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 20:14:23.124455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803229387.mount: Deactivated successfully.
Oct 8 20:14:24.358556 containerd[1477]: time="2024-10-08T20:14:24.358502360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:24.359552 containerd[1477]: time="2024-10-08T20:14:24.359521548Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286150"
Oct 8 20:14:24.360356 containerd[1477]: time="2024-10-08T20:14:24.360286609Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:24.363080 containerd[1477]: time="2024-10-08T20:14:24.363030644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:24.364579 containerd[1477]: time="2024-10-08T20:14:24.364323800Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 1.935494126s"
Oct 8 20:14:24.364579 containerd[1477]: time="2024-10-08T20:14:24.364357241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 8 20:14:24.387205 containerd[1477]: time="2024-10-08T20:14:24.387159945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 20:14:24.630165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Oct 8 20:14:24.637894 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:14:24.758002 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:14:24.778961 (kubelet)[2086]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:14:24.835391 kubelet[2086]: E1008 20:14:24.835333 2086 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:14:24.837939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:14:24.838084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:14:25.895847 containerd[1477]: time="2024-10-08T20:14:25.895787276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:25.897162 containerd[1477]: time="2024-10-08T20:14:25.896898786Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374224"
Oct 8 20:14:25.898509 containerd[1477]: time="2024-10-08T20:14:25.898458949Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:25.902536 containerd[1477]: time="2024-10-08T20:14:25.902488218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:25.904112 containerd[1477]: time="2024-10-08T20:14:25.904059741Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.516851634s"
Oct 8 20:14:25.908335 containerd[1477]: time="2024-10-08T20:14:25.904228665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\""
Oct 8 20:14:25.927660 containerd[1477]: time="2024-10-08T20:14:25.927611780Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 20:14:27.118370 containerd[1477]: time="2024-10-08T20:14:27.118306492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:27.119958 containerd[1477]: time="2024-10-08T20:14:27.119925935Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751237"
Oct 8 20:14:27.120220 containerd[1477]: time="2024-10-08T20:14:27.120191742Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:27.122946 containerd[1477]: time="2024-10-08T20:14:27.122907175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:27.124027 containerd[1477]: time="2024-10-08T20:14:27.123981204Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.196179018s"
Oct 8 20:14:27.124027 containerd[1477]: time="2024-10-08T20:14:27.124017125Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\""
Oct 8 20:14:27.143983 containerd[1477]: time="2024-10-08T20:14:27.143945617Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 20:14:28.389100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396334162.mount: Deactivated successfully.
Oct 8 20:14:28.675307 containerd[1477]: time="2024-10-08T20:14:28.675029833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:28.676340 containerd[1477]: time="2024-10-08T20:14:28.676166674Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254064"
Oct 8 20:14:28.677334 containerd[1477]: time="2024-10-08T20:14:28.677272833Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:28.680256 containerd[1477]: time="2024-10-08T20:14:28.680178537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:28.681795 containerd[1477]: time="2024-10-08T20:14:28.681743152Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 1.537758375s"
Oct 8 20:14:28.681871 containerd[1477]: time="2024-10-08T20:14:28.681790034Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\""
Oct 8 20:14:28.705260 containerd[1477]: time="2024-10-08T20:14:28.705191948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 20:14:29.349948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount475157714.mount: Deactivated successfully.
Oct 8 20:14:29.895364 containerd[1477]: time="2024-10-08T20:14:29.895300227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:29.897199 containerd[1477]: time="2024-10-08T20:14:29.897168773Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485461"
Oct 8 20:14:29.897991 containerd[1477]: time="2024-10-08T20:14:29.897951561Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:29.900787 containerd[1477]: time="2024-10-08T20:14:29.900730019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:29.902104 containerd[1477]: time="2024-10-08T20:14:29.901978743Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.196746833s"
Oct 8 20:14:29.902104 containerd[1477]: time="2024-10-08T20:14:29.902010384Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 20:14:29.922797 containerd[1477]: time="2024-10-08T20:14:29.922758156Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 20:14:30.510661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424369550.mount: Deactivated successfully.
Oct 8 20:14:30.516885 containerd[1477]: time="2024-10-08T20:14:30.516818836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:30.518442 containerd[1477]: time="2024-10-08T20:14:30.518351930Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:30.518442 containerd[1477]: time="2024-10-08T20:14:30.518418652Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268841"
Oct 8 20:14:30.521332 containerd[1477]: time="2024-10-08T20:14:30.521244031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:30.522757 containerd[1477]: time="2024-10-08T20:14:30.522622639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 599.655276ms"
Oct 8 20:14:30.522757 containerd[1477]: time="2024-10-08T20:14:30.522662281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 20:14:30.545873 containerd[1477]: time="2024-10-08T20:14:30.545827571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 20:14:31.160683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999877704.mount: Deactivated successfully.
Oct 8 20:14:32.543459 containerd[1477]: time="2024-10-08T20:14:32.542324900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:32.544789 containerd[1477]: time="2024-10-08T20:14:32.544760223Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200866"
Oct 8 20:14:32.546375 containerd[1477]: time="2024-10-08T20:14:32.546342918Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:32.550002 containerd[1477]: time="2024-10-08T20:14:32.549969202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 20:14:32.551631 containerd[1477]: time="2024-10-08T20:14:32.551601258Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.005724086s"
Oct 8 20:14:32.551743 containerd[1477]: time="2024-10-08T20:14:32.551726503Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 8 20:14:34.879922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Oct 8 20:14:34.889393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:14:35.010515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:14:35.015523 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 20:14:35.067325 kubelet[2290]: E1008 20:14:35.067256 2290 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 20:14:35.069814 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 20:14:35.069955 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 20:14:35.577539 systemd[1]: Started sshd@9-49.13.72.235:22-121.142.87.218:39212.service - OpenSSH per-connection server daemon (121.142.87.218:39212).
Oct 8 20:14:36.843877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:14:36.857803 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:14:36.894054 systemd[1]: Reloading requested from client PID 2308 ('systemctl') (unit session-7.scope)...
Oct 8 20:14:36.894123 systemd[1]: Reloading...
Oct 8 20:14:36.999425 zram_generator::config[2349]: No configuration found.
Oct 8 20:14:37.103708 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 20:14:37.172141 systemd[1]: Reloading finished in 277 ms.
Oct 8 20:14:37.225058 sshd[2300]: Invalid user airtech from 121.142.87.218 port 39212
Oct 8 20:14:37.232959 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 20:14:37.233035 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 20:14:37.233584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:14:37.236753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 20:14:37.366498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 20:14:37.382693 (kubelet)[2398]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 20:14:37.437426 kubelet[2398]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:14:37.437426 kubelet[2398]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 20:14:37.437426 kubelet[2398]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 20:14:37.437838 kubelet[2398]: I1008 20:14:37.437737 2398 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 20:14:37.554379 sshd[2300]: Received disconnect from 121.142.87.218 port 39212:11: Bye Bye [preauth]
Oct 8 20:14:37.554379 sshd[2300]: Disconnected from invalid user airtech 121.142.87.218 port 39212 [preauth]
Oct 8 20:14:37.558213 systemd[1]: sshd@9-49.13.72.235:22-121.142.87.218:39212.service: Deactivated successfully.
Oct 8 20:14:38.420353 kubelet[2398]: I1008 20:14:38.418619 2398 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 20:14:38.420353 kubelet[2398]: I1008 20:14:38.418671 2398 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 20:14:38.420353 kubelet[2398]: I1008 20:14:38.419058 2398 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 20:14:38.457717 kubelet[2398]: E1008 20:14:38.457673 2398 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://49.13.72.235:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 49.13.72.235:6443: connect: connection refused
Oct 8 20:14:38.457717 kubelet[2398]: I1008 20:14:38.457727 2398 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 20:14:38.471574 kubelet[2398]: I1008 20:14:38.471519 2398 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 20:14:38.473306 kubelet[2398]: I1008 20:14:38.473270 2398 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 20:14:38.473668 kubelet[2398]: I1008 20:14:38.473640 2398 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 20:14:38.473668 kubelet[2398]: I1008 20:14:38.473668 2398 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 20:14:38.473816 kubelet[2398]: I1008 20:14:38.473679 2398 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 20:14:38.475335 kubelet[2398]: I1008 20:14:38.475301 2398 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 20:14:38.478282 kubelet[2398]: I1008 20:14:38.478255 2398 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 20:14:38.478282 kubelet[2398]: I1008 20:14:38.478284 2398 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 20:14:38.478395 kubelet[2398]: I1008 20:14:38.478325 2398 kubelet.go:312] "Adding apiserver pod source"
Oct 8 20:14:38.478395 kubelet[2398]: I1008 20:14:38.478355 2398 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 20:14:38.480581 kubelet[2398]: W1008 20:14:38.480506 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.72.235:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused
Oct 8 20:14:38.480628 kubelet[2398]: E1008 20:14:38.480592 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.72.235:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused
Oct 8 20:14:38.480956 kubelet[2398]: W1008 20:14:38.480909 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.13.72.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-1-c965454201&limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused
Oct 8 20:14:38.481009 kubelet[2398]: E1008 20:14:38.480960 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.72.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-1-c965454201&limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused
Oct 8 20:14:38.481115 kubelet[2398]: I1008 20:14:38.481094 2398 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Oct 8 20:14:38.481743 kubelet[2398]: I1008 20:14:38.481656 2398 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 20:14:38.482378 kubelet[2398]: W1008 20:14:38.482352 2398 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 20:14:38.483371 kubelet[2398]: I1008 20:14:38.483345 2398 server.go:1256] "Started kubelet"
Oct 8 20:14:38.484925 kubelet[2398]: I1008 20:14:38.484905 2398 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 20:14:38.485320 kubelet[2398]: I1008 20:14:38.485286 2398 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 20:14:38.485464 kubelet[2398]: I1008 20:14:38.485451 2398 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 20:14:38.486283 kubelet[2398]: I1008 20:14:38.486263 2398 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 20:14:38.487220 kubelet[2398]: I1008 20:14:38.485461 2398 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 20:14:38.491983 kubelet[2398]: I1008 20:14:38.491960 2398 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 20:14:38.492895 kubelet[2398]: E1008 20:14:38.492860 2398 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.72.235:6443/api/v1/namespaces/default/events\": dial tcp 49.13.72.235:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975-2-2-1-c965454201.17fc9372d937be47 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975-2-2-1-c965454201,UID:ci-3975-2-2-1-c965454201,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975-2-2-1-c965454201,},FirstTimestamp:2024-10-08 20:14:38.483291719 +0000 UTC m=+1.097022825,LastTimestamp:2024-10-08 20:14:38.483291719 +0000 UTC m=+1.097022825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975-2-2-1-c965454201,}"
Oct 8 20:14:38.494763 kubelet[2398]: I1008 20:14:38.494744 2398 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 20:14:38.494928 kubelet[2398]: I1008 20:14:38.494915 2398 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 20:14:38.496537 kubelet[2398]: E1008 20:14:38.496504 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.72.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-1-c965454201?timeout=10s\": dial tcp 49.13.72.235:6443: connect: connection refused" interval="200ms"
Oct 8 20:14:38.496992 kubelet[2398]: I1008 20:14:38.496920 2398 factory.go:221] Registration of the systemd container factory successfully
Oct 8 20:14:38.497049 kubelet[2398]: I1008 20:14:38.497013 2398 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 20:14:38.499337 kubelet[2398]: W1008 20:14:38.498376 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.72.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused
Oct 8 20:14:38.499337 kubelet[2398]: E1008 20:14:38.498422 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.72.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp
49.13.72.235:6443: connect: connection refused Oct 8 20:14:38.499337 kubelet[2398]: I1008 20:14:38.498997 2398 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:14:38.506060 kubelet[2398]: I1008 20:14:38.505886 2398 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:14:38.507578 kubelet[2398]: I1008 20:14:38.507560 2398 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:14:38.507679 kubelet[2398]: I1008 20:14:38.507669 2398 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:14:38.507741 kubelet[2398]: I1008 20:14:38.507733 2398 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 20:14:38.508048 kubelet[2398]: E1008 20:14:38.507821 2398 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:14:38.513875 kubelet[2398]: W1008 20:14:38.513723 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.72.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:38.513875 kubelet[2398]: E1008 20:14:38.513769 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.72.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:38.513875 kubelet[2398]: E1008 20:14:38.513863 2398 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:14:38.523145 kubelet[2398]: I1008 20:14:38.523110 2398 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:14:38.523145 kubelet[2398]: I1008 20:14:38.523147 2398 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:14:38.523277 kubelet[2398]: I1008 20:14:38.523195 2398 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:14:38.525791 kubelet[2398]: I1008 20:14:38.525747 2398 policy_none.go:49] "None policy: Start" Oct 8 20:14:38.526557 kubelet[2398]: I1008 20:14:38.526517 2398 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:14:38.526636 kubelet[2398]: I1008 20:14:38.526574 2398 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:14:38.533790 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 20:14:38.548091 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 20:14:38.552252 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 20:14:38.568767 kubelet[2398]: I1008 20:14:38.568526 2398 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:14:38.568943 kubelet[2398]: I1008 20:14:38.568915 2398 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:14:38.572730 kubelet[2398]: E1008 20:14:38.572574 2398 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975-2-2-1-c965454201\" not found" Oct 8 20:14:38.596227 kubelet[2398]: I1008 20:14:38.596191 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:38.597982 kubelet[2398]: E1008 20:14:38.597951 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.72.235:6443/api/v1/nodes\": dial tcp 49.13.72.235:6443: connect: connection refused" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:38.608222 kubelet[2398]: I1008 20:14:38.608187 2398 topology_manager.go:215] "Topology Admit Handler" podUID="7870ed714bc1a4d81533da1c223fc972" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.610278 kubelet[2398]: I1008 20:14:38.609979 2398 topology_manager.go:215] "Topology Admit Handler" podUID="174c8c678eabbf4e179e40c7f846dd67" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.613101 kubelet[2398]: I1008 20:14:38.612875 2398 topology_manager.go:215] "Topology Admit Handler" podUID="06832156d29f438b028b0138fc3232c4" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.622048 systemd[1]: Created slice kubepods-burstable-pod7870ed714bc1a4d81533da1c223fc972.slice - libcontainer container kubepods-burstable-pod7870ed714bc1a4d81533da1c223fc972.slice. 
Oct 8 20:14:38.653832 systemd[1]: Created slice kubepods-burstable-pod06832156d29f438b028b0138fc3232c4.slice - libcontainer container kubepods-burstable-pod06832156d29f438b028b0138fc3232c4.slice. Oct 8 20:14:38.659736 systemd[1]: Created slice kubepods-burstable-pod174c8c678eabbf4e179e40c7f846dd67.slice - libcontainer container kubepods-burstable-pod174c8c678eabbf4e179e40c7f846dd67.slice. Oct 8 20:14:38.697383 kubelet[2398]: E1008 20:14:38.697199 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.72.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-1-c965454201?timeout=10s\": dial tcp 49.13.72.235:6443: connect: connection refused" interval="400ms" Oct 8 20:14:38.797127 kubelet[2398]: I1008 20:14:38.796622 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/174c8c678eabbf4e179e40c7f846dd67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-1-c965454201\" (UID: \"174c8c678eabbf4e179e40c7f846dd67\") " pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797127 kubelet[2398]: I1008 20:14:38.796696 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797127 kubelet[2398]: I1008 20:14:38.796737 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " 
pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797127 kubelet[2398]: I1008 20:14:38.796801 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797127 kubelet[2398]: I1008 20:14:38.796843 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/174c8c678eabbf4e179e40c7f846dd67-k8s-certs\") pod \"kube-apiserver-ci-3975-2-2-1-c965454201\" (UID: \"174c8c678eabbf4e179e40c7f846dd67\") " pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797679 kubelet[2398]: I1008 20:14:38.796883 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797679 kubelet[2398]: I1008 20:14:38.796930 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797679 kubelet[2398]: I1008 20:14:38.796968 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7870ed714bc1a4d81533da1c223fc972-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-1-c965454201\" (UID: \"7870ed714bc1a4d81533da1c223fc972\") " pod="kube-system/kube-scheduler-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.797679 kubelet[2398]: I1008 20:14:38.797006 2398 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/174c8c678eabbf4e179e40c7f846dd67-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-1-c965454201\" (UID: \"174c8c678eabbf4e179e40c7f846dd67\") " pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:38.802045 kubelet[2398]: I1008 20:14:38.801976 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:38.802512 kubelet[2398]: E1008 20:14:38.802460 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.72.235:6443/api/v1/nodes\": dial tcp 49.13.72.235:6443: connect: connection refused" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:38.945526 containerd[1477]: time="2024-10-08T20:14:38.945444029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-1-c965454201,Uid:7870ed714bc1a4d81533da1c223fc972,Namespace:kube-system,Attempt:0,}" Oct 8 20:14:38.957894 containerd[1477]: time="2024-10-08T20:14:38.957680829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-1-c965454201,Uid:06832156d29f438b028b0138fc3232c4,Namespace:kube-system,Attempt:0,}" Oct 8 20:14:38.965604 containerd[1477]: time="2024-10-08T20:14:38.965381881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-1-c965454201,Uid:174c8c678eabbf4e179e40c7f846dd67,Namespace:kube-system,Attempt:0,}" Oct 8 20:14:39.098790 kubelet[2398]: E1008 20:14:39.098720 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://49.13.72.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-1-c965454201?timeout=10s\": dial tcp 49.13.72.235:6443: connect: connection refused" interval="800ms" Oct 8 20:14:39.214719 kubelet[2398]: I1008 20:14:39.214479 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:39.215101 kubelet[2398]: E1008 20:14:39.214973 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.72.235:6443/api/v1/nodes\": dial tcp 49.13.72.235:6443: connect: connection refused" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:39.396811 kubelet[2398]: W1008 20:14:39.396664 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://49.13.72.235:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:39.396811 kubelet[2398]: E1008 20:14:39.396771 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://49.13.72.235:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:39.531639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682678901.mount: Deactivated successfully. 
Oct 8 20:14:39.536889 containerd[1477]: time="2024-10-08T20:14:39.536835033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:14:39.538062 containerd[1477]: time="2024-10-08T20:14:39.538016951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Oct 8 20:14:39.540482 containerd[1477]: time="2024-10-08T20:14:39.540443630Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:14:39.542325 containerd[1477]: time="2024-10-08T20:14:39.542125285Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:14:39.542325 containerd[1477]: time="2024-10-08T20:14:39.542200367Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:14:39.543256 containerd[1477]: time="2024-10-08T20:14:39.543221600Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:14:39.543930 containerd[1477]: time="2024-10-08T20:14:39.543880702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 20:14:39.550728 containerd[1477]: time="2024-10-08T20:14:39.550576759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 20:14:39.551774 
containerd[1477]: time="2024-10-08T20:14:39.551217780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 605.619345ms" Oct 8 20:14:39.552220 containerd[1477]: time="2024-10-08T20:14:39.552059407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.546442ms" Oct 8 20:14:39.556458 containerd[1477]: time="2024-10-08T20:14:39.556406428Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 598.13322ms" Oct 8 20:14:39.666399 kubelet[2398]: W1008 20:14:39.666258 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://49.13.72.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-1-c965454201&limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:39.666399 kubelet[2398]: E1008 20:14:39.666374 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://49.13.72.235:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975-2-2-1-c965454201&limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:39.685669 containerd[1477]: 
time="2024-10-08T20:14:39.685394334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:14:39.685669 containerd[1477]: time="2024-10-08T20:14:39.685455256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:14:39.685669 containerd[1477]: time="2024-10-08T20:14:39.685473856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:14:39.685669 containerd[1477]: time="2024-10-08T20:14:39.685487977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:14:39.687710 containerd[1477]: time="2024-10-08T20:14:39.687532963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:14:39.687710 containerd[1477]: time="2024-10-08T20:14:39.687622766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:14:39.687710 containerd[1477]: time="2024-10-08T20:14:39.687643967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:14:39.687710 containerd[1477]: time="2024-10-08T20:14:39.687658327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:14:39.691906 kubelet[2398]: W1008 20:14:39.691810 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://49.13.72.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:39.691906 kubelet[2398]: E1008 20:14:39.691877 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://49.13.72.235:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:39.696795 containerd[1477]: time="2024-10-08T20:14:39.696417891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:14:39.696795 containerd[1477]: time="2024-10-08T20:14:39.696486893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:14:39.696795 containerd[1477]: time="2024-10-08T20:14:39.696505214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:14:39.696795 containerd[1477]: time="2024-10-08T20:14:39.696518414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:14:39.715175 systemd[1]: Started cri-containerd-56054e2ab806c11c82b9d94c937d48e9b9a0ebda9c05af69bcece70fdd0bfde7.scope - libcontainer container 56054e2ab806c11c82b9d94c937d48e9b9a0ebda9c05af69bcece70fdd0bfde7. Oct 8 20:14:39.723207 systemd[1]: Started cri-containerd-f488855d9339d7190082d5655287de60677cda4d6a01119c9f8d5d5a9fcf0562.scope - libcontainer container f488855d9339d7190082d5655287de60677cda4d6a01119c9f8d5d5a9fcf0562. 
Oct 8 20:14:39.728443 systemd[1]: Started cri-containerd-efe5d125ad85418a85339d90034a2c2e552e5c5abb774c960add7e137702d16a.scope - libcontainer container efe5d125ad85418a85339d90034a2c2e552e5c5abb774c960add7e137702d16a. Oct 8 20:14:39.778356 containerd[1477]: time="2024-10-08T20:14:39.777973378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975-2-2-1-c965454201,Uid:7870ed714bc1a4d81533da1c223fc972,Namespace:kube-system,Attempt:0,} returns sandbox id \"56054e2ab806c11c82b9d94c937d48e9b9a0ebda9c05af69bcece70fdd0bfde7\"" Oct 8 20:14:39.784378 containerd[1477]: time="2024-10-08T20:14:39.784216700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975-2-2-1-c965454201,Uid:174c8c678eabbf4e179e40c7f846dd67,Namespace:kube-system,Attempt:0,} returns sandbox id \"f488855d9339d7190082d5655287de60677cda4d6a01119c9f8d5d5a9fcf0562\"" Oct 8 20:14:39.790597 containerd[1477]: time="2024-10-08T20:14:39.790442982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975-2-2-1-c965454201,Uid:06832156d29f438b028b0138fc3232c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"efe5d125ad85418a85339d90034a2c2e552e5c5abb774c960add7e137702d16a\"" Oct 8 20:14:39.791392 containerd[1477]: time="2024-10-08T20:14:39.791117604Z" level=info msg="CreateContainer within sandbox \"f488855d9339d7190082d5655287de60677cda4d6a01119c9f8d5d5a9fcf0562\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 20:14:39.791483 containerd[1477]: time="2024-10-08T20:14:39.791412814Z" level=info msg="CreateContainer within sandbox \"56054e2ab806c11c82b9d94c937d48e9b9a0ebda9c05af69bcece70fdd0bfde7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 20:14:39.796072 containerd[1477]: time="2024-10-08T20:14:39.796004603Z" level=info msg="CreateContainer within sandbox \"efe5d125ad85418a85339d90034a2c2e552e5c5abb774c960add7e137702d16a\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 20:14:39.807491 containerd[1477]: time="2024-10-08T20:14:39.807440934Z" level=info msg="CreateContainer within sandbox \"f488855d9339d7190082d5655287de60677cda4d6a01119c9f8d5d5a9fcf0562\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6869a4da7f46b0b198d28838683ea698014b162a151fabcaa6952f64ba4c5861\"" Oct 8 20:14:39.808199 containerd[1477]: time="2024-10-08T20:14:39.808170958Z" level=info msg="StartContainer for \"6869a4da7f46b0b198d28838683ea698014b162a151fabcaa6952f64ba4c5861\"" Oct 8 20:14:39.816013 containerd[1477]: time="2024-10-08T20:14:39.815973131Z" level=info msg="CreateContainer within sandbox \"56054e2ab806c11c82b9d94c937d48e9b9a0ebda9c05af69bcece70fdd0bfde7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"832d8c5cc86b81ed1105cd66e33c3a6332c44e34434e4cc87fd25989cdd774f6\"" Oct 8 20:14:39.818464 containerd[1477]: time="2024-10-08T20:14:39.816864720Z" level=info msg="StartContainer for \"832d8c5cc86b81ed1105cd66e33c3a6332c44e34434e4cc87fd25989cdd774f6\"" Oct 8 20:14:39.819828 containerd[1477]: time="2024-10-08T20:14:39.819692971Z" level=info msg="CreateContainer within sandbox \"efe5d125ad85418a85339d90034a2c2e552e5c5abb774c960add7e137702d16a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4f546e2059479e3cbfc396af50018fd57dd6c9bbb1f220eb6caead7367c276a1\"" Oct 8 20:14:39.820664 containerd[1477]: time="2024-10-08T20:14:39.820249869Z" level=info msg="StartContainer for \"4f546e2059479e3cbfc396af50018fd57dd6c9bbb1f220eb6caead7367c276a1\"" Oct 8 20:14:39.824900 systemd[1]: Started sshd@10-49.13.72.235:22-27.254.149.199:60834.service - OpenSSH per-connection server daemon (27.254.149.199:60834). 
Oct 8 20:14:39.848651 systemd[1]: Started cri-containerd-6869a4da7f46b0b198d28838683ea698014b162a151fabcaa6952f64ba4c5861.scope - libcontainer container 6869a4da7f46b0b198d28838683ea698014b162a151fabcaa6952f64ba4c5861. Oct 8 20:14:39.881626 systemd[1]: Started cri-containerd-4f546e2059479e3cbfc396af50018fd57dd6c9bbb1f220eb6caead7367c276a1.scope - libcontainer container 4f546e2059479e3cbfc396af50018fd57dd6c9bbb1f220eb6caead7367c276a1. Oct 8 20:14:39.885163 systemd[1]: Started cri-containerd-832d8c5cc86b81ed1105cd66e33c3a6332c44e34434e4cc87fd25989cdd774f6.scope - libcontainer container 832d8c5cc86b81ed1105cd66e33c3a6332c44e34434e4cc87fd25989cdd774f6. Oct 8 20:14:39.900126 kubelet[2398]: E1008 20:14:39.899904 2398 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.72.235:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975-2-2-1-c965454201?timeout=10s\": dial tcp 49.13.72.235:6443: connect: connection refused" interval="1.6s" Oct 8 20:14:39.911385 containerd[1477]: time="2024-10-08T20:14:39.911111778Z" level=info msg="StartContainer for \"6869a4da7f46b0b198d28838683ea698014b162a151fabcaa6952f64ba4c5861\" returns successfully" Oct 8 20:14:39.957640 containerd[1477]: time="2024-10-08T20:14:39.957593246Z" level=info msg="StartContainer for \"832d8c5cc86b81ed1105cd66e33c3a6332c44e34434e4cc87fd25989cdd774f6\" returns successfully" Oct 8 20:14:39.958129 containerd[1477]: time="2024-10-08T20:14:39.958021660Z" level=info msg="StartContainer for \"4f546e2059479e3cbfc396af50018fd57dd6c9bbb1f220eb6caead7367c276a1\" returns successfully" Oct 8 20:14:40.018101 kubelet[2398]: I1008 20:14:40.017815 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:40.018272 kubelet[2398]: E1008 20:14:40.018245 2398 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://49.13.72.235:6443/api/v1/nodes\": dial tcp 49.13.72.235:6443: connect: connection 
refused" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:40.038940 kubelet[2398]: W1008 20:14:40.038156 2398 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://49.13.72.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:40.038940 kubelet[2398]: E1008 20:14:40.038222 2398 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://49.13.72.235:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.72.235:6443: connect: connection refused Oct 8 20:14:40.925343 sshd[2570]: Invalid user mahtabrazmara from 27.254.149.199 port 60834 Oct 8 20:14:41.127556 sshd[2570]: Received disconnect from 27.254.149.199 port 60834:11: Bye Bye [preauth] Oct 8 20:14:41.127556 sshd[2570]: Disconnected from invalid user mahtabrazmara 27.254.149.199 port 60834 [preauth] Oct 8 20:14:41.129748 systemd[1]: sshd@10-49.13.72.235:22-27.254.149.199:60834.service: Deactivated successfully. 
Oct 8 20:14:41.622094 kubelet[2398]: I1008 20:14:41.622035 2398 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:42.389291 kubelet[2398]: E1008 20:14:42.389193 2398 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975-2-2-1-c965454201\" not found" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:42.425297 kubelet[2398]: I1008 20:14:42.425147 2398 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:42.482761 kubelet[2398]: I1008 20:14:42.482537 2398 apiserver.go:52] "Watching apiserver" Oct 8 20:14:42.496345 kubelet[2398]: I1008 20:14:42.495792 2398 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:14:45.553338 systemd[1]: Reloading requested from client PID 2686 ('systemctl') (unit session-7.scope)... Oct 8 20:14:45.553357 systemd[1]: Reloading... Oct 8 20:14:45.659409 zram_generator::config[2725]: No configuration found. Oct 8 20:14:45.764778 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 20:14:45.846384 systemd[1]: Reloading finished in 292 ms. Oct 8 20:14:45.897397 kubelet[2398]: I1008 20:14:45.897083 2398 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:14:45.897194 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 20:14:45.913937 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 20:14:45.914551 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:14:45.914706 systemd[1]: kubelet.service: Consumed 1.552s CPU time, 111.8M memory peak, 0B memory swap peak. Oct 8 20:14:45.922740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Oct 8 20:14:46.077161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 20:14:46.091697 (kubelet)[2770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 20:14:46.146623 kubelet[2770]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:14:46.146623 kubelet[2770]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 20:14:46.146623 kubelet[2770]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 20:14:46.146623 kubelet[2770]: I1008 20:14:46.145635 2770 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 20:14:46.151427 kubelet[2770]: I1008 20:14:46.151393 2770 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Oct 8 20:14:46.151427 kubelet[2770]: I1008 20:14:46.151421 2770 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 20:14:46.151686 kubelet[2770]: I1008 20:14:46.151670 2770 server.go:919] "Client rotation is on, will bootstrap in background" Oct 8 20:14:46.154414 kubelet[2770]: I1008 20:14:46.154316 2770 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 20:14:46.157135 kubelet[2770]: I1008 20:14:46.156954 2770 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 20:14:46.165757 kubelet[2770]: I1008 20:14:46.165727 2770 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 20:14:46.166230 kubelet[2770]: I1008 20:14:46.166216 2770 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 20:14:46.166567 kubelet[2770]: I1008 20:14:46.166539 2770 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Oct 8 20:14:46.167396 kubelet[2770]: I1008 20:14:46.167240 2770 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 20:14:46.167612 kubelet[2770]: I1008 20:14:46.167457 2770 container_manager_linux.go:301] "Creating device plugin manager" Oct 8 20:14:46.170408 kubelet[2770]: I1008 20:14:46.167726 2770 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:14:46.172897 kubelet[2770]: I1008 20:14:46.170748 2770 kubelet.go:396] "Attempting to sync node with API server" Oct 8 20:14:46.172897 kubelet[2770]: I1008 20:14:46.170773 2770 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 20:14:46.172897 kubelet[2770]: I1008 20:14:46.170798 2770 kubelet.go:312] "Adding apiserver pod source" Oct 8 20:14:46.172897 kubelet[2770]: I1008 20:14:46.170813 2770 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 20:14:46.172897 kubelet[2770]: I1008 20:14:46.172125 2770 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 20:14:46.172897 kubelet[2770]: I1008 20:14:46.172431 2770 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 20:14:46.173084 kubelet[2770]: I1008 20:14:46.172977 2770 server.go:1256] "Started kubelet" Oct 8 20:14:46.176123 kubelet[2770]: I1008 20:14:46.175983 2770 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 20:14:46.186827 kubelet[2770]: I1008 20:14:46.186785 2770 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 20:14:46.187774 kubelet[2770]: I1008 20:14:46.187743 2770 server.go:461] "Adding debug handlers to kubelet server" Oct 8 20:14:46.191389 kubelet[2770]: I1008 20:14:46.190550 2770 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 20:14:46.191720 kubelet[2770]: I1008 20:14:46.191703 2770 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 20:14:46.207354 kubelet[2770]: I1008 20:14:46.206695 2770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 20:14:46.209828 kubelet[2770]: I1008 20:14:46.209306 2770 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 20:14:46.209828 kubelet[2770]: I1008 20:14:46.209416 2770 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 20:14:46.209828 kubelet[2770]: I1008 20:14:46.209437 2770 kubelet.go:2329] "Starting kubelet main sync loop" Oct 8 20:14:46.209828 kubelet[2770]: E1008 20:14:46.209490 2770 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 20:14:46.215997 kubelet[2770]: I1008 20:14:46.215418 2770 volume_manager.go:291] "Starting Kubelet Volume Manager" Oct 8 20:14:46.228191 kubelet[2770]: I1008 20:14:46.228144 2770 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 8 20:14:46.229574 kubelet[2770]: I1008 20:14:46.228655 2770 reconciler_new.go:29] "Reconciler: start to sync state" Oct 8 20:14:46.242845 kubelet[2770]: I1008 20:14:46.240127 2770 factory.go:221] Registration of the containerd container factory successfully Oct 8 20:14:46.242845 kubelet[2770]: I1008 20:14:46.240150 2770 factory.go:221] Registration of the systemd container factory successfully Oct 8 20:14:46.242845 kubelet[2770]: I1008 20:14:46.240231 2770 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 20:14:46.245148 kubelet[2770]: E1008 20:14:46.245088 2770 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 20:14:46.310929 kubelet[2770]: E1008 20:14:46.310894 2770 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 20:14:46.323667 kubelet[2770]: I1008 20:14:46.323644 2770 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:46.325350 kubelet[2770]: I1008 20:14:46.325266 2770 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 20:14:46.325350 kubelet[2770]: I1008 20:14:46.325290 2770 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 20:14:46.326266 kubelet[2770]: I1008 20:14:46.325308 2770 state_mem.go:36] "Initialized new in-memory state store" Oct 8 20:14:46.326266 kubelet[2770]: I1008 20:14:46.325534 2770 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 20:14:46.326266 kubelet[2770]: I1008 20:14:46.325556 2770 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 20:14:46.326266 kubelet[2770]: I1008 20:14:46.325562 2770 policy_none.go:49] "None policy: Start" Oct 8 20:14:46.326266 kubelet[2770]: I1008 20:14:46.326127 2770 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 20:14:46.326266 kubelet[2770]: I1008 20:14:46.326157 2770 state_mem.go:35] "Initializing new in-memory state store" Oct 8 20:14:46.327645 kubelet[2770]: I1008 20:14:46.326631 2770 state_mem.go:75] "Updated machine memory state" Oct 8 20:14:46.332705 kubelet[2770]: I1008 20:14:46.332645 2770 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 20:14:46.335301 kubelet[2770]: I1008 20:14:46.335274 2770 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 20:14:46.341295 kubelet[2770]: I1008 20:14:46.341260 2770 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:46.341431 kubelet[2770]: I1008 
20:14:46.341392 2770 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975-2-2-1-c965454201" Oct 8 20:14:46.514062 kubelet[2770]: I1008 20:14:46.511850 2770 topology_manager.go:215] "Topology Admit Handler" podUID="174c8c678eabbf4e179e40c7f846dd67" podNamespace="kube-system" podName="kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.514062 kubelet[2770]: I1008 20:14:46.511990 2770 topology_manager.go:215] "Topology Admit Handler" podUID="06832156d29f438b028b0138fc3232c4" podNamespace="kube-system" podName="kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.514062 kubelet[2770]: I1008 20:14:46.512097 2770 topology_manager.go:215] "Topology Admit Handler" podUID="7870ed714bc1a4d81533da1c223fc972" podNamespace="kube-system" podName="kube-scheduler-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.523050 kubelet[2770]: E1008 20:14:46.522546 2770 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-2-2-1-c965454201\" already exists" pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.525544 kubelet[2770]: E1008 20:14:46.525502 2770 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975-2-2-1-c965454201\" already exists" pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531347 kubelet[2770]: I1008 20:14:46.530805 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/174c8c678eabbf4e179e40c7f846dd67-ca-certs\") pod \"kube-apiserver-ci-3975-2-2-1-c965454201\" (UID: \"174c8c678eabbf4e179e40c7f846dd67\") " pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531347 kubelet[2770]: I1008 20:14:46.530859 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/174c8c678eabbf4e179e40c7f846dd67-k8s-certs\") pod 
\"kube-apiserver-ci-3975-2-2-1-c965454201\" (UID: \"174c8c678eabbf4e179e40c7f846dd67\") " pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531347 kubelet[2770]: I1008 20:14:46.530895 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/174c8c678eabbf4e179e40c7f846dd67-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975-2-2-1-c965454201\" (UID: \"174c8c678eabbf4e179e40c7f846dd67\") " pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531347 kubelet[2770]: I1008 20:14:46.530928 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-flexvolume-dir\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531347 kubelet[2770]: I1008 20:14:46.530963 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531571 kubelet[2770]: I1008 20:14:46.530996 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-ca-certs\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531571 kubelet[2770]: I1008 20:14:46.531039 2770 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-k8s-certs\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531571 kubelet[2770]: I1008 20:14:46.531070 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06832156d29f438b028b0138fc3232c4-kubeconfig\") pod \"kube-controller-manager-ci-3975-2-2-1-c965454201\" (UID: \"06832156d29f438b028b0138fc3232c4\") " pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" Oct 8 20:14:46.531571 kubelet[2770]: I1008 20:14:46.531120 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7870ed714bc1a4d81533da1c223fc972-kubeconfig\") pod \"kube-scheduler-ci-3975-2-2-1-c965454201\" (UID: \"7870ed714bc1a4d81533da1c223fc972\") " pod="kube-system/kube-scheduler-ci-3975-2-2-1-c965454201" Oct 8 20:14:47.173123 kubelet[2770]: I1008 20:14:47.173039 2770 apiserver.go:52] "Watching apiserver" Oct 8 20:14:47.228656 kubelet[2770]: I1008 20:14:47.228619 2770 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 20:14:47.295784 kubelet[2770]: E1008 20:14:47.295259 2770 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975-2-2-1-c965454201\" already exists" pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" Oct 8 20:14:47.336344 kubelet[2770]: I1008 20:14:47.336284 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975-2-2-1-c965454201" podStartSLOduration=3.336232395 podStartE2EDuration="3.336232395s" podCreationTimestamp="2024-10-08 
20:14:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:14:47.317965914 +0000 UTC m=+1.222557634" watchObservedRunningTime="2024-10-08 20:14:47.336232395 +0000 UTC m=+1.240824075" Oct 8 20:14:47.336575 kubelet[2770]: I1008 20:14:47.336421 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975-2-2-1-c965454201" podStartSLOduration=1.336403521 podStartE2EDuration="1.336403521s" podCreationTimestamp="2024-10-08 20:14:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:14:47.334519063 +0000 UTC m=+1.239110783" watchObservedRunningTime="2024-10-08 20:14:47.336403521 +0000 UTC m=+1.240995241" Oct 8 20:14:47.405745 kubelet[2770]: I1008 20:14:47.405699 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975-2-2-1-c965454201" podStartSLOduration=2.40565913 podStartE2EDuration="2.40565913s" podCreationTimestamp="2024-10-08 20:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:14:47.357256002 +0000 UTC m=+1.261847762" watchObservedRunningTime="2024-10-08 20:14:47.40565913 +0000 UTC m=+1.310250850" Oct 8 20:14:51.043808 sudo[1878]: pam_unix(sudo:session): session closed for user root Oct 8 20:14:51.200610 sshd[1875]: pam_unix(sshd:session): session closed for user core Oct 8 20:14:51.207651 systemd[1]: sshd@8-49.13.72.235:22-139.178.89.65:60986.service: Deactivated successfully. Oct 8 20:14:51.211530 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 20:14:51.211799 systemd[1]: session-7.scope: Consumed 6.179s CPU time, 133.8M memory peak, 0B memory swap peak. Oct 8 20:14:51.212788 systemd-logind[1457]: Session 7 logged out. 
Waiting for processes to exit. Oct 8 20:14:51.214731 systemd-logind[1457]: Removed session 7. Oct 8 20:15:00.210372 kubelet[2770]: I1008 20:15:00.210280 2770 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 20:15:00.211734 kubelet[2770]: I1008 20:15:00.210943 2770 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 20:15:00.211785 containerd[1477]: time="2024-10-08T20:15:00.210745724Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 20:15:00.394746 kubelet[2770]: I1008 20:15:00.393994 2770 topology_manager.go:215] "Topology Admit Handler" podUID="bf7fd42e-5c15-471a-998c-84464508fd4c" podNamespace="kube-system" podName="kube-proxy-j7j82" Oct 8 20:15:00.404473 systemd[1]: Created slice kubepods-besteffort-podbf7fd42e_5c15_471a_998c_84464508fd4c.slice - libcontainer container kubepods-besteffort-podbf7fd42e_5c15_471a_998c_84464508fd4c.slice. 
Oct 8 20:15:00.418894 kubelet[2770]: I1008 20:15:00.418852 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf7fd42e-5c15-471a-998c-84464508fd4c-xtables-lock\") pod \"kube-proxy-j7j82\" (UID: \"bf7fd42e-5c15-471a-998c-84464508fd4c\") " pod="kube-system/kube-proxy-j7j82" Oct 8 20:15:00.418894 kubelet[2770]: I1008 20:15:00.418897 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2xnt\" (UniqueName: \"kubernetes.io/projected/bf7fd42e-5c15-471a-998c-84464508fd4c-kube-api-access-l2xnt\") pod \"kube-proxy-j7j82\" (UID: \"bf7fd42e-5c15-471a-998c-84464508fd4c\") " pod="kube-system/kube-proxy-j7j82" Oct 8 20:15:00.419085 kubelet[2770]: I1008 20:15:00.418921 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf7fd42e-5c15-471a-998c-84464508fd4c-kube-proxy\") pod \"kube-proxy-j7j82\" (UID: \"bf7fd42e-5c15-471a-998c-84464508fd4c\") " pod="kube-system/kube-proxy-j7j82" Oct 8 20:15:00.419085 kubelet[2770]: I1008 20:15:00.418942 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf7fd42e-5c15-471a-998c-84464508fd4c-lib-modules\") pod \"kube-proxy-j7j82\" (UID: \"bf7fd42e-5c15-471a-998c-84464508fd4c\") " pod="kube-system/kube-proxy-j7j82" Oct 8 20:15:00.536246 kubelet[2770]: E1008 20:15:00.536044 2770 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Oct 8 20:15:00.536246 kubelet[2770]: E1008 20:15:00.536088 2770 projected.go:200] Error preparing data for projected volume kube-api-access-l2xnt for pod kube-system/kube-proxy-j7j82: configmap "kube-root-ca.crt" not found Oct 8 20:15:00.536246 kubelet[2770]: E1008 20:15:00.536167 2770 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf7fd42e-5c15-471a-998c-84464508fd4c-kube-api-access-l2xnt podName:bf7fd42e-5c15-471a-998c-84464508fd4c nodeName:}" failed. No retries permitted until 2024-10-08 20:15:01.036144916 +0000 UTC m=+14.940736636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l2xnt" (UniqueName: "kubernetes.io/projected/bf7fd42e-5c15-471a-998c-84464508fd4c-kube-api-access-l2xnt") pod "kube-proxy-j7j82" (UID: "bf7fd42e-5c15-471a-998c-84464508fd4c") : configmap "kube-root-ca.crt" not found Oct 8 20:15:01.313415 containerd[1477]: time="2024-10-08T20:15:01.313366015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j7j82,Uid:bf7fd42e-5c15-471a-998c-84464508fd4c,Namespace:kube-system,Attempt:0,}" Oct 8 20:15:01.318468 kubelet[2770]: I1008 20:15:01.317539 2770 topology_manager.go:215] "Topology Admit Handler" podUID="7aba3037-7907-4b33-a5b2-86126af563c3" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-ttz44" Oct 8 20:15:01.336289 systemd[1]: Created slice kubepods-besteffort-pod7aba3037_7907_4b33_a5b2_86126af563c3.slice - libcontainer container kubepods-besteffort-pod7aba3037_7907_4b33_a5b2_86126af563c3.slice. Oct 8 20:15:01.351721 containerd[1477]: time="2024-10-08T20:15:01.351618950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:01.352106 containerd[1477]: time="2024-10-08T20:15:01.351683792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:01.352106 containerd[1477]: time="2024-10-08T20:15:01.351705632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:01.352106 containerd[1477]: time="2024-10-08T20:15:01.351720233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:01.378552 systemd[1]: Started cri-containerd-04c2e9a566bae886028b5a1568e3a00e4f9edbc98befb4fab14875ca7a7f600e.scope - libcontainer container 04c2e9a566bae886028b5a1568e3a00e4f9edbc98befb4fab14875ca7a7f600e. Oct 8 20:15:01.404115 containerd[1477]: time="2024-10-08T20:15:01.404048650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j7j82,Uid:bf7fd42e-5c15-471a-998c-84464508fd4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"04c2e9a566bae886028b5a1568e3a00e4f9edbc98befb4fab14875ca7a7f600e\"" Oct 8 20:15:01.409119 containerd[1477]: time="2024-10-08T20:15:01.409080474Z" level=info msg="CreateContainer within sandbox \"04c2e9a566bae886028b5a1568e3a00e4f9edbc98befb4fab14875ca7a7f600e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 20:15:01.425249 containerd[1477]: time="2024-10-08T20:15:01.425203775Z" level=info msg="CreateContainer within sandbox \"04c2e9a566bae886028b5a1568e3a00e4f9edbc98befb4fab14875ca7a7f600e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e94f820ad1f0d70dc1525f6a64687922dc3b53acd642f4822712f1fbfa37dac\"" Oct 8 20:15:01.425949 kubelet[2770]: I1008 20:15:01.425695 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkg8h\" (UniqueName: \"kubernetes.io/projected/7aba3037-7907-4b33-a5b2-86126af563c3-kube-api-access-wkg8h\") pod \"tigera-operator-5d56685c77-ttz44\" (UID: \"7aba3037-7907-4b33-a5b2-86126af563c3\") " pod="tigera-operator/tigera-operator-5d56685c77-ttz44" Oct 8 20:15:01.425949 kubelet[2770]: I1008 20:15:01.425735 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/7aba3037-7907-4b33-a5b2-86126af563c3-var-lib-calico\") pod \"tigera-operator-5d56685c77-ttz44\" (UID: \"7aba3037-7907-4b33-a5b2-86126af563c3\") " pod="tigera-operator/tigera-operator-5d56685c77-ttz44" Oct 8 20:15:01.426033 containerd[1477]: time="2024-10-08T20:15:01.425953317Z" level=info msg="StartContainer for \"0e94f820ad1f0d70dc1525f6a64687922dc3b53acd642f4822712f1fbfa37dac\"" Oct 8 20:15:01.459565 systemd[1]: Started cri-containerd-0e94f820ad1f0d70dc1525f6a64687922dc3b53acd642f4822712f1fbfa37dac.scope - libcontainer container 0e94f820ad1f0d70dc1525f6a64687922dc3b53acd642f4822712f1fbfa37dac. Oct 8 20:15:01.492423 containerd[1477]: time="2024-10-08T20:15:01.492373258Z" level=info msg="StartContainer for \"0e94f820ad1f0d70dc1525f6a64687922dc3b53acd642f4822712f1fbfa37dac\" returns successfully" Oct 8 20:15:01.643750 containerd[1477]: time="2024-10-08T20:15:01.643688388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ttz44,Uid:7aba3037-7907-4b33-a5b2-86126af563c3,Namespace:tigera-operator,Attempt:0,}" Oct 8 20:15:01.676377 containerd[1477]: time="2024-10-08T20:15:01.676093675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:01.676626 containerd[1477]: time="2024-10-08T20:15:01.676251400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:01.676626 containerd[1477]: time="2024-10-08T20:15:01.676275240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:01.676626 containerd[1477]: time="2024-10-08T20:15:01.676286961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:01.696517 systemd[1]: Started cri-containerd-581080d1d109a0f850ec4c72ad27574abe657435ee663f11f2ffb43caf053144.scope - libcontainer container 581080d1d109a0f850ec4c72ad27574abe657435ee663f11f2ffb43caf053144. Oct 8 20:15:01.731747 containerd[1477]: time="2024-10-08T20:15:01.731681746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-ttz44,Uid:7aba3037-7907-4b33-a5b2-86126af563c3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"581080d1d109a0f850ec4c72ad27574abe657435ee663f11f2ffb43caf053144\"" Oct 8 20:15:01.734717 containerd[1477]: time="2024-10-08T20:15:01.734524187Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 20:15:02.133936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1506680188.mount: Deactivated successfully. Oct 8 20:15:03.632754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274726246.mount: Deactivated successfully. 
Oct 8 20:15:04.786944 containerd[1477]: time="2024-10-08T20:15:04.786864394Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:04.788111 containerd[1477]: time="2024-10-08T20:15:04.787948065Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485895" Oct 8 20:15:04.788854 containerd[1477]: time="2024-10-08T20:15:04.788801729Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:04.791131 containerd[1477]: time="2024-10-08T20:15:04.791069793Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:04.791924 containerd[1477]: time="2024-10-08T20:15:04.791816374Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 3.057237465s" Oct 8 20:15:04.791924 containerd[1477]: time="2024-10-08T20:15:04.791847895Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 8 20:15:04.794898 containerd[1477]: time="2024-10-08T20:15:04.794859460Z" level=info msg="CreateContainer within sandbox \"581080d1d109a0f850ec4c72ad27574abe657435ee663f11f2ffb43caf053144\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 20:15:04.818262 containerd[1477]: time="2024-10-08T20:15:04.818204240Z" level=info msg="CreateContainer within sandbox 
\"581080d1d109a0f850ec4c72ad27574abe657435ee663f11f2ffb43caf053144\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5092438464a63169ae8677533b1c29730e1a4aa46239f05d103d5538118abc6c\"" Oct 8 20:15:04.819067 containerd[1477]: time="2024-10-08T20:15:04.818830538Z" level=info msg="StartContainer for \"5092438464a63169ae8677533b1c29730e1a4aa46239f05d103d5538118abc6c\"" Oct 8 20:15:04.849497 systemd[1]: Started cri-containerd-5092438464a63169ae8677533b1c29730e1a4aa46239f05d103d5538118abc6c.scope - libcontainer container 5092438464a63169ae8677533b1c29730e1a4aa46239f05d103d5538118abc6c. Oct 8 20:15:04.885355 containerd[1477]: time="2024-10-08T20:15:04.883511207Z" level=info msg="StartContainer for \"5092438464a63169ae8677533b1c29730e1a4aa46239f05d103d5538118abc6c\" returns successfully" Oct 8 20:15:05.329117 kubelet[2770]: I1008 20:15:05.329074 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-j7j82" podStartSLOduration=5.329031088 podStartE2EDuration="5.329031088s" podCreationTimestamp="2024-10-08 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:15:02.327869368 +0000 UTC m=+16.232461088" watchObservedRunningTime="2024-10-08 20:15:05.329031088 +0000 UTC m=+19.233622808" Oct 8 20:15:05.330130 kubelet[2770]: I1008 20:15:05.329167 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-ttz44" podStartSLOduration=1.270493585 podStartE2EDuration="4.329147491s" podCreationTimestamp="2024-10-08 20:15:01 +0000 UTC" firstStartedPulling="2024-10-08 20:15:01.733438196 +0000 UTC m=+15.638029916" lastFinishedPulling="2024-10-08 20:15:04.792092102 +0000 UTC m=+18.696683822" observedRunningTime="2024-10-08 20:15:05.328907124 +0000 UTC m=+19.233498884" watchObservedRunningTime="2024-10-08 20:15:05.329147491 +0000 UTC m=+19.233739211" 
Oct 8 20:15:09.345369 kubelet[2770]: I1008 20:15:09.345304 2770 topology_manager.go:215] "Topology Admit Handler" podUID="6507be53-9b13-4f86-9827-8d53752aea25" podNamespace="calico-system" podName="calico-typha-6f54f477d7-6lhqd" Oct 8 20:15:09.355606 systemd[1]: Created slice kubepods-besteffort-pod6507be53_9b13_4f86_9827_8d53752aea25.slice - libcontainer container kubepods-besteffort-pod6507be53_9b13_4f86_9827_8d53752aea25.slice. Oct 8 20:15:09.381009 kubelet[2770]: I1008 20:15:09.379560 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5hbd\" (UniqueName: \"kubernetes.io/projected/6507be53-9b13-4f86-9827-8d53752aea25-kube-api-access-g5hbd\") pod \"calico-typha-6f54f477d7-6lhqd\" (UID: \"6507be53-9b13-4f86-9827-8d53752aea25\") " pod="calico-system/calico-typha-6f54f477d7-6lhqd" Oct 8 20:15:09.381009 kubelet[2770]: I1008 20:15:09.379621 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6507be53-9b13-4f86-9827-8d53752aea25-tigera-ca-bundle\") pod \"calico-typha-6f54f477d7-6lhqd\" (UID: \"6507be53-9b13-4f86-9827-8d53752aea25\") " pod="calico-system/calico-typha-6f54f477d7-6lhqd" Oct 8 20:15:09.381009 kubelet[2770]: I1008 20:15:09.380906 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6507be53-9b13-4f86-9827-8d53752aea25-typha-certs\") pod \"calico-typha-6f54f477d7-6lhqd\" (UID: \"6507be53-9b13-4f86-9827-8d53752aea25\") " pod="calico-system/calico-typha-6f54f477d7-6lhqd" Oct 8 20:15:09.440342 kubelet[2770]: I1008 20:15:09.439947 2770 topology_manager.go:215] "Topology Admit Handler" podUID="0326db4c-1308-45e5-9dd3-48374e879632" podNamespace="calico-system" podName="calico-node-55kvf" Oct 8 20:15:09.451604 systemd[1]: Created slice 
kubepods-besteffort-pod0326db4c_1308_45e5_9dd3_48374e879632.slice - libcontainer container kubepods-besteffort-pod0326db4c_1308_45e5_9dd3_48374e879632.slice. Oct 8 20:15:09.482001 kubelet[2770]: I1008 20:15:09.481237 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-cni-bin-dir\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482001 kubelet[2770]: I1008 20:15:09.481372 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-var-run-calico\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482001 kubelet[2770]: I1008 20:15:09.481440 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-policysync\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482001 kubelet[2770]: I1008 20:15:09.481513 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-xtables-lock\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482001 kubelet[2770]: I1008 20:15:09.481556 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gjk6\" (UniqueName: \"kubernetes.io/projected/0326db4c-1308-45e5-9dd3-48374e879632-kube-api-access-6gjk6\") pod \"calico-node-55kvf\" 
(UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482338 kubelet[2770]: I1008 20:15:09.481598 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-lib-modules\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482338 kubelet[2770]: I1008 20:15:09.481789 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-var-lib-calico\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482678 kubelet[2770]: I1008 20:15:09.481837 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-flexvol-driver-host\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482836 kubelet[2770]: I1008 20:15:09.482820 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-cni-net-dir\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.482961 kubelet[2770]: I1008 20:15:09.482946 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0326db4c-1308-45e5-9dd3-48374e879632-cni-log-dir\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " 
pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.483115 kubelet[2770]: I1008 20:15:09.483100 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0326db4c-1308-45e5-9dd3-48374e879632-tigera-ca-bundle\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.483255 kubelet[2770]: I1008 20:15:09.483239 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0326db4c-1308-45e5-9dd3-48374e879632-node-certs\") pod \"calico-node-55kvf\" (UID: \"0326db4c-1308-45e5-9dd3-48374e879632\") " pod="calico-system/calico-node-55kvf" Oct 8 20:15:09.560001 kubelet[2770]: I1008 20:15:09.559921 2770 topology_manager.go:215] "Topology Admit Handler" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" podNamespace="calico-system" podName="csi-node-driver-mkwvt" Oct 8 20:15:09.560387 kubelet[2770]: E1008 20:15:09.560218 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:09.584588 kubelet[2770]: I1008 20:15:09.584078 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7004f452-d92e-454b-be56-1d3a59702cfb-kubelet-dir\") pod \"csi-node-driver-mkwvt\" (UID: \"7004f452-d92e-454b-be56-1d3a59702cfb\") " pod="calico-system/csi-node-driver-mkwvt" Oct 8 20:15:09.584588 kubelet[2770]: I1008 20:15:09.584135 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/7004f452-d92e-454b-be56-1d3a59702cfb-socket-dir\") pod \"csi-node-driver-mkwvt\" (UID: \"7004f452-d92e-454b-be56-1d3a59702cfb\") " pod="calico-system/csi-node-driver-mkwvt" Oct 8 20:15:09.584588 kubelet[2770]: I1008 20:15:09.584164 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfnh2\" (UniqueName: \"kubernetes.io/projected/7004f452-d92e-454b-be56-1d3a59702cfb-kube-api-access-tfnh2\") pod \"csi-node-driver-mkwvt\" (UID: \"7004f452-d92e-454b-be56-1d3a59702cfb\") " pod="calico-system/csi-node-driver-mkwvt" Oct 8 20:15:09.584588 kubelet[2770]: I1008 20:15:09.584199 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7004f452-d92e-454b-be56-1d3a59702cfb-registration-dir\") pod \"csi-node-driver-mkwvt\" (UID: \"7004f452-d92e-454b-be56-1d3a59702cfb\") " pod="calico-system/csi-node-driver-mkwvt" Oct 8 20:15:09.584588 kubelet[2770]: I1008 20:15:09.584262 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7004f452-d92e-454b-be56-1d3a59702cfb-varrun\") pod \"csi-node-driver-mkwvt\" (UID: \"7004f452-d92e-454b-be56-1d3a59702cfb\") " pod="calico-system/csi-node-driver-mkwvt" Oct 8 20:15:09.586084 kubelet[2770]: E1008 20:15:09.585972 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.587467 kubelet[2770]: W1008 20:15:09.587446 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.587691 kubelet[2770]: E1008 20:15:09.587588 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.589577 kubelet[2770]: E1008 20:15:09.589538 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.589577 kubelet[2770]: W1008 20:15:09.589556 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.589937 kubelet[2770]: E1008 20:15:09.589791 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.590073 kubelet[2770]: E1008 20:15:09.590059 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.590188 kubelet[2770]: W1008 20:15:09.590130 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.590230 kubelet[2770]: E1008 20:15:09.590183 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.590480 kubelet[2770]: E1008 20:15:09.590424 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.590480 kubelet[2770]: W1008 20:15:09.590435 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.590480 kubelet[2770]: E1008 20:15:09.590465 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.591554 kubelet[2770]: E1008 20:15:09.591485 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.591554 kubelet[2770]: W1008 20:15:09.591500 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.591645 kubelet[2770]: E1008 20:15:09.591559 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.595344 kubelet[2770]: E1008 20:15:09.593819 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.595344 kubelet[2770]: W1008 20:15:09.593834 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.595344 kubelet[2770]: E1008 20:15:09.593880 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.597545 kubelet[2770]: E1008 20:15:09.597370 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.597545 kubelet[2770]: W1008 20:15:09.597387 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.597545 kubelet[2770]: E1008 20:15:09.597431 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.598903 kubelet[2770]: E1008 20:15:09.597713 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.598903 kubelet[2770]: W1008 20:15:09.597724 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.598903 kubelet[2770]: E1008 20:15:09.597775 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.599143 kubelet[2770]: E1008 20:15:09.599080 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.599143 kubelet[2770]: W1008 20:15:09.599094 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.599205 kubelet[2770]: E1008 20:15:09.599142 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.601269 kubelet[2770]: E1008 20:15:09.601248 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.601451 kubelet[2770]: W1008 20:15:09.601370 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.601451 kubelet[2770]: E1008 20:15:09.601401 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.602076 kubelet[2770]: E1008 20:15:09.601983 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.602076 kubelet[2770]: W1008 20:15:09.602001 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.602076 kubelet[2770]: E1008 20:15:09.602024 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.603638 kubelet[2770]: E1008 20:15:09.602213 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.603638 kubelet[2770]: W1008 20:15:09.602250 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.603638 kubelet[2770]: E1008 20:15:09.602273 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.603638 kubelet[2770]: E1008 20:15:09.602450 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.603638 kubelet[2770]: W1008 20:15:09.602458 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.603638 kubelet[2770]: E1008 20:15:09.602469 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.603963 kubelet[2770]: E1008 20:15:09.603942 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.603963 kubelet[2770]: W1008 20:15:09.603963 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.604095 kubelet[2770]: E1008 20:15:09.604028 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.604160 kubelet[2770]: E1008 20:15:09.604128 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.604160 kubelet[2770]: W1008 20:15:09.604136 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.604249 kubelet[2770]: E1008 20:15:09.604179 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.604396 kubelet[2770]: E1008 20:15:09.604380 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.604396 kubelet[2770]: W1008 20:15:09.604392 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.604526 kubelet[2770]: E1008 20:15:09.604446 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.604755 kubelet[2770]: E1008 20:15:09.604736 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.604755 kubelet[2770]: W1008 20:15:09.604749 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.604939 kubelet[2770]: E1008 20:15:09.604807 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.605141 kubelet[2770]: E1008 20:15:09.605119 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.605141 kubelet[2770]: W1008 20:15:09.605132 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.605220 kubelet[2770]: E1008 20:15:09.605152 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.606424 kubelet[2770]: E1008 20:15:09.606404 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.606424 kubelet[2770]: W1008 20:15:09.606420 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.606543 kubelet[2770]: E1008 20:15:09.606440 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.606635 kubelet[2770]: E1008 20:15:09.606624 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.606635 kubelet[2770]: W1008 20:15:09.606634 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.606694 kubelet[2770]: E1008 20:15:09.606650 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.607941 kubelet[2770]: E1008 20:15:09.606807 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.607941 kubelet[2770]: W1008 20:15:09.606815 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.607941 kubelet[2770]: E1008 20:15:09.606826 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.618812 kubelet[2770]: E1008 20:15:09.618766 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.618812 kubelet[2770]: W1008 20:15:09.618790 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.618812 kubelet[2770]: E1008 20:15:09.618811 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.660447 containerd[1477]: time="2024-10-08T20:15:09.660033595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f54f477d7-6lhqd,Uid:6507be53-9b13-4f86-9827-8d53752aea25,Namespace:calico-system,Attempt:0,}" Oct 8 20:15:09.686132 kubelet[2770]: E1008 20:15:09.685713 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.686132 kubelet[2770]: W1008 20:15:09.685735 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.686132 kubelet[2770]: E1008 20:15:09.685986 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.687350 kubelet[2770]: E1008 20:15:09.686976 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.687350 kubelet[2770]: W1008 20:15:09.686988 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.687350 kubelet[2770]: E1008 20:15:09.687003 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.687606 kubelet[2770]: E1008 20:15:09.687592 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.688368 kubelet[2770]: W1008 20:15:09.688347 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.688557 kubelet[2770]: E1008 20:15:09.688495 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.690489 kubelet[2770]: E1008 20:15:09.690398 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.690489 kubelet[2770]: W1008 20:15:09.690431 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.690489 kubelet[2770]: E1008 20:15:09.690449 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.690815 kubelet[2770]: E1008 20:15:09.690659 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.690815 kubelet[2770]: W1008 20:15:09.690671 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.690815 kubelet[2770]: E1008 20:15:09.690682 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.690815 kubelet[2770]: E1008 20:15:09.690802 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.690815 kubelet[2770]: W1008 20:15:09.690814 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.691021 kubelet[2770]: E1008 20:15:09.690854 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.691541 kubelet[2770]: E1008 20:15:09.691223 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.691541 kubelet[2770]: W1008 20:15:09.691235 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.693335 kubelet[2770]: E1008 20:15:09.692527 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.693335 kubelet[2770]: E1008 20:15:09.692592 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.694635 kubelet[2770]: W1008 20:15:09.692607 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.694635 kubelet[2770]: E1008 20:15:09.694179 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.694635 kubelet[2770]: E1008 20:15:09.694460 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.694635 kubelet[2770]: W1008 20:15:09.694480 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.698697 kubelet[2770]: E1008 20:15:09.695771 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.698697 kubelet[2770]: W1008 20:15:09.695788 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.698697 kubelet[2770]: E1008 20:15:09.698565 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.698697 kubelet[2770]: W1008 20:15:09.698579 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Oct 8 20:15:09.698697 kubelet[2770]: E1008 20:15:09.698633 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.698697 kubelet[2770]: E1008 20:15:09.698663 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.698697 kubelet[2770]: E1008 20:15:09.698682 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.698910 kubelet[2770]: E1008 20:15:09.698743 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.698910 kubelet[2770]: W1008 20:15:09.698751 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.699038 kubelet[2770]: E1008 20:15:09.698974 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.699215 kubelet[2770]: E1008 20:15:09.699199 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.699260 kubelet[2770]: W1008 20:15:09.699217 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.700370 kubelet[2770]: E1008 20:15:09.699337 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.700506 kubelet[2770]: E1008 20:15:09.699514 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.700635 kubelet[2770]: W1008 20:15:09.700560 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.700635 kubelet[2770]: E1008 20:15:09.700590 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.701083 kubelet[2770]: E1008 20:15:09.700863 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.701083 kubelet[2770]: W1008 20:15:09.700875 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.701083 kubelet[2770]: E1008 20:15:09.700908 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.701083 kubelet[2770]: E1008 20:15:09.701023 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.701083 kubelet[2770]: W1008 20:15:09.701031 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.701083 kubelet[2770]: E1008 20:15:09.701073 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.701335 kubelet[2770]: E1008 20:15:09.701286 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.701335 kubelet[2770]: W1008 20:15:09.701297 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.701504 kubelet[2770]: E1008 20:15:09.701493 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.701671 kubelet[2770]: E1008 20:15:09.701662 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.701799 kubelet[2770]: W1008 20:15:09.701725 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.701799 kubelet[2770]: E1008 20:15:09.701752 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.704481 kubelet[2770]: E1008 20:15:09.704425 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.704481 kubelet[2770]: W1008 20:15:09.704477 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.704586 kubelet[2770]: E1008 20:15:09.704500 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.704824 kubelet[2770]: E1008 20:15:09.704789 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.704964 kubelet[2770]: W1008 20:15:09.704831 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.704964 kubelet[2770]: E1008 20:15:09.704848 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.708336 kubelet[2770]: E1008 20:15:09.705071 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.708336 kubelet[2770]: W1008 20:15:09.705084 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.708336 kubelet[2770]: E1008 20:15:09.705106 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.708576 kubelet[2770]: E1008 20:15:09.708560 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.708699 kubelet[2770]: W1008 20:15:09.708626 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.708699 kubelet[2770]: E1008 20:15:09.708679 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.709019 kubelet[2770]: E1008 20:15:09.709007 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.709086 kubelet[2770]: W1008 20:15:09.709075 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.709213 kubelet[2770]: E1008 20:15:09.709145 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.709455 kubelet[2770]: E1008 20:15:09.709442 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.709817 kubelet[2770]: W1008 20:15:09.709521 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.709817 kubelet[2770]: E1008 20:15:09.709547 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:09.709900 kubelet[2770]: E1008 20:15:09.709840 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.709900 kubelet[2770]: W1008 20:15:09.709869 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.709900 kubelet[2770]: E1008 20:15:09.709884 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.720639 containerd[1477]: time="2024-10-08T20:15:09.720529115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:09.720639 containerd[1477]: time="2024-10-08T20:15:09.720592036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:09.721000 containerd[1477]: time="2024-10-08T20:15:09.720702999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:09.721054 containerd[1477]: time="2024-10-08T20:15:09.721027328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:09.729455 kubelet[2770]: E1008 20:15:09.729427 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:09.729603 kubelet[2770]: W1008 20:15:09.729588 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:09.729675 kubelet[2770]: E1008 20:15:09.729664 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:09.744494 systemd[1]: Started cri-containerd-003cea7ab3774a3da95765ad8cc24b0a506aa2a921dc61aa59a63ab42b064072.scope - libcontainer container 003cea7ab3774a3da95765ad8cc24b0a506aa2a921dc61aa59a63ab42b064072. Oct 8 20:15:09.757010 containerd[1477]: time="2024-10-08T20:15:09.756968846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-55kvf,Uid:0326db4c-1308-45e5-9dd3-48374e879632,Namespace:calico-system,Attempt:0,}" Oct 8 20:15:09.788934 containerd[1477]: time="2024-10-08T20:15:09.788677927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:09.788934 containerd[1477]: time="2024-10-08T20:15:09.788731768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:09.788934 containerd[1477]: time="2024-10-08T20:15:09.788745049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:09.788934 containerd[1477]: time="2024-10-08T20:15:09.788754329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:09.814566 systemd[1]: Started cri-containerd-f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748.scope - libcontainer container f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748. Oct 8 20:15:09.820104 containerd[1477]: time="2024-10-08T20:15:09.819610346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6f54f477d7-6lhqd,Uid:6507be53-9b13-4f86-9827-8d53752aea25,Namespace:calico-system,Attempt:0,} returns sandbox id \"003cea7ab3774a3da95765ad8cc24b0a506aa2a921dc61aa59a63ab42b064072\"" Oct 8 20:15:09.823228 containerd[1477]: time="2024-10-08T20:15:09.823073562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 20:15:09.861260 containerd[1477]: time="2024-10-08T20:15:09.860646485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-55kvf,Uid:0326db4c-1308-45e5-9dd3-48374e879632,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748\"" Oct 8 20:15:11.210658 kubelet[2770]: E1008 20:15:11.210530 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:12.487343 containerd[1477]: time="2024-10-08T20:15:12.486425992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:12.487343 containerd[1477]: time="2024-10-08T20:15:12.487290736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 20:15:12.489023 containerd[1477]: time="2024-10-08T20:15:12.488850179Z" level=info msg="ImageCreate event 
name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:12.493424 containerd[1477]: time="2024-10-08T20:15:12.493354423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:12.497185 containerd[1477]: time="2024-10-08T20:15:12.495357958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.672246594s" Oct 8 20:15:12.497185 containerd[1477]: time="2024-10-08T20:15:12.495389999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 20:15:12.498863 containerd[1477]: time="2024-10-08T20:15:12.498841934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 20:15:12.514033 containerd[1477]: time="2024-10-08T20:15:12.513990150Z" level=info msg="CreateContainer within sandbox \"003cea7ab3774a3da95765ad8cc24b0a506aa2a921dc61aa59a63ab42b064072\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 20:15:12.533593 containerd[1477]: time="2024-10-08T20:15:12.533539288Z" level=info msg="CreateContainer within sandbox \"003cea7ab3774a3da95765ad8cc24b0a506aa2a921dc61aa59a63ab42b064072\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fddea3e5e985e9d6bc1768af474b86586213918df93558ae15cf5906c83e846e\"" Oct 8 20:15:12.534803 containerd[1477]: time="2024-10-08T20:15:12.534763801Z" level=info msg="StartContainer for 
\"fddea3e5e985e9d6bc1768af474b86586213918df93558ae15cf5906c83e846e\"" Oct 8 20:15:12.574579 systemd[1]: Started cri-containerd-fddea3e5e985e9d6bc1768af474b86586213918df93558ae15cf5906c83e846e.scope - libcontainer container fddea3e5e985e9d6bc1768af474b86586213918df93558ae15cf5906c83e846e. Oct 8 20:15:12.615878 containerd[1477]: time="2024-10-08T20:15:12.615773669Z" level=info msg="StartContainer for \"fddea3e5e985e9d6bc1768af474b86586213918df93558ae15cf5906c83e846e\" returns successfully" Oct 8 20:15:13.210621 kubelet[2770]: E1008 20:15:13.210519 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:13.402697 kubelet[2770]: E1008 20:15:13.402654 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.402817 kubelet[2770]: W1008 20:15:13.402690 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.402817 kubelet[2770]: E1008 20:15:13.402743 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.403079 kubelet[2770]: E1008 20:15:13.403062 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.403182 kubelet[2770]: W1008 20:15:13.403083 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.403182 kubelet[2770]: E1008 20:15:13.403109 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.403737 kubelet[2770]: E1008 20:15:13.403596 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.403777 kubelet[2770]: W1008 20:15:13.403745 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.403802 kubelet[2770]: E1008 20:15:13.403776 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.404299 kubelet[2770]: E1008 20:15:13.404277 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.404395 kubelet[2770]: W1008 20:15:13.404372 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.404500 kubelet[2770]: E1008 20:15:13.404408 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.404956 kubelet[2770]: E1008 20:15:13.404936 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.405028 kubelet[2770]: W1008 20:15:13.404960 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.405028 kubelet[2770]: E1008 20:15:13.404986 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.405413 kubelet[2770]: E1008 20:15:13.405393 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.405458 kubelet[2770]: W1008 20:15:13.405419 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.405458 kubelet[2770]: E1008 20:15:13.405443 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.405904 kubelet[2770]: E1008 20:15:13.405817 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.405904 kubelet[2770]: W1008 20:15:13.405892 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.406034 kubelet[2770]: E1008 20:15:13.405927 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.406381 kubelet[2770]: E1008 20:15:13.406358 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.406437 kubelet[2770]: W1008 20:15:13.406383 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.406437 kubelet[2770]: E1008 20:15:13.406411 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.406824 kubelet[2770]: E1008 20:15:13.406803 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.406884 kubelet[2770]: W1008 20:15:13.406826 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.406884 kubelet[2770]: E1008 20:15:13.406851 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.407194 kubelet[2770]: E1008 20:15:13.407175 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.407267 kubelet[2770]: W1008 20:15:13.407213 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.407267 kubelet[2770]: E1008 20:15:13.407240 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.407599 kubelet[2770]: E1008 20:15:13.407581 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.407646 kubelet[2770]: W1008 20:15:13.407601 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.407646 kubelet[2770]: E1008 20:15:13.407628 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.407930 kubelet[2770]: E1008 20:15:13.407912 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.407991 kubelet[2770]: W1008 20:15:13.407932 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.407991 kubelet[2770]: E1008 20:15:13.407956 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.408284 kubelet[2770]: E1008 20:15:13.408265 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.408388 kubelet[2770]: W1008 20:15:13.408286 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.408388 kubelet[2770]: E1008 20:15:13.408377 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.408672 kubelet[2770]: E1008 20:15:13.408656 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.408723 kubelet[2770]: W1008 20:15:13.408675 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.408723 kubelet[2770]: E1008 20:15:13.408698 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.409040 kubelet[2770]: E1008 20:15:13.409021 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.409099 kubelet[2770]: W1008 20:15:13.409042 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.409099 kubelet[2770]: E1008 20:15:13.409067 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.425696 kubelet[2770]: E1008 20:15:13.425572 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.425696 kubelet[2770]: W1008 20:15:13.425606 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.425696 kubelet[2770]: E1008 20:15:13.425645 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.426143 kubelet[2770]: E1008 20:15:13.426108 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.426143 kubelet[2770]: W1008 20:15:13.426137 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.426293 kubelet[2770]: E1008 20:15:13.426172 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.426563 kubelet[2770]: E1008 20:15:13.426536 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.426563 kubelet[2770]: W1008 20:15:13.426561 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.426715 kubelet[2770]: E1008 20:15:13.426594 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.426884 kubelet[2770]: E1008 20:15:13.426866 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.426958 kubelet[2770]: W1008 20:15:13.426885 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.426958 kubelet[2770]: E1008 20:15:13.426925 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.427303 kubelet[2770]: E1008 20:15:13.427279 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.427303 kubelet[2770]: W1008 20:15:13.427298 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.427474 kubelet[2770]: E1008 20:15:13.427420 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.428494 kubelet[2770]: E1008 20:15:13.428458 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.428494 kubelet[2770]: W1008 20:15:13.428494 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.428789 kubelet[2770]: E1008 20:15:13.428530 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.429172 kubelet[2770]: E1008 20:15:13.428973 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.429172 kubelet[2770]: W1008 20:15:13.428996 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.430085 kubelet[2770]: E1008 20:15:13.429711 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.430085 kubelet[2770]: W1008 20:15:13.429740 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.430085 kubelet[2770]: E1008 20:15:13.429769 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.430447 kubelet[2770]: E1008 20:15:13.430019 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.431081 kubelet[2770]: E1008 20:15:13.430866 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.431081 kubelet[2770]: W1008 20:15:13.430898 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.431081 kubelet[2770]: E1008 20:15:13.430937 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.431804 kubelet[2770]: E1008 20:15:13.431730 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.431804 kubelet[2770]: W1008 20:15:13.431751 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.431804 kubelet[2770]: E1008 20:15:13.431776 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.432851 kubelet[2770]: E1008 20:15:13.432536 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.432851 kubelet[2770]: W1008 20:15:13.432558 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.432851 kubelet[2770]: E1008 20:15:13.432601 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.435895 kubelet[2770]: E1008 20:15:13.435604 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.435895 kubelet[2770]: W1008 20:15:13.435637 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.436253 kubelet[2770]: E1008 20:15:13.436015 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.436253 kubelet[2770]: E1008 20:15:13.436113 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.436253 kubelet[2770]: W1008 20:15:13.436126 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.436253 kubelet[2770]: E1008 20:15:13.436146 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.436857 kubelet[2770]: E1008 20:15:13.436720 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.436857 kubelet[2770]: W1008 20:15:13.436736 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.436857 kubelet[2770]: E1008 20:15:13.436763 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.438841 kubelet[2770]: E1008 20:15:13.438725 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.438841 kubelet[2770]: W1008 20:15:13.438744 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.438841 kubelet[2770]: E1008 20:15:13.438774 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.439453 kubelet[2770]: E1008 20:15:13.439198 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.439453 kubelet[2770]: W1008 20:15:13.439213 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.439453 kubelet[2770]: E1008 20:15:13.439227 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:13.440699 kubelet[2770]: E1008 20:15:13.440674 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.440699 kubelet[2770]: W1008 20:15:13.440693 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.441088 kubelet[2770]: E1008 20:15:13.440712 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 20:15:13.442364 kubelet[2770]: E1008 20:15:13.441448 2770 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 20:15:13.442364 kubelet[2770]: W1008 20:15:13.441468 2770 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 20:15:13.442364 kubelet[2770]: E1008 20:15:13.441526 2770 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 20:15:14.160499 containerd[1477]: time="2024-10-08T20:15:14.160433677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:14.161824 containerd[1477]: time="2024-10-08T20:15:14.161780354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 20:15:14.163004 containerd[1477]: time="2024-10-08T20:15:14.162947826Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:14.164873 containerd[1477]: time="2024-10-08T20:15:14.164743435Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:14.165864 containerd[1477]: time="2024-10-08T20:15:14.165455054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.666470917s" Oct 8 20:15:14.165864 containerd[1477]: time="2024-10-08T20:15:14.165494975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 20:15:14.168503 containerd[1477]: time="2024-10-08T20:15:14.168460136Z" level=info msg="CreateContainer within sandbox \"f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 20:15:14.191004 containerd[1477]: time="2024-10-08T20:15:14.190930671Z" level=info msg="CreateContainer within sandbox \"f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3\"" Oct 8 20:15:14.192419 containerd[1477]: time="2024-10-08T20:15:14.192373630Z" level=info msg="StartContainer for \"a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3\"" Oct 8 20:15:14.227471 systemd[1]: Started cri-containerd-a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3.scope - libcontainer container a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3. Oct 8 20:15:14.274895 containerd[1477]: time="2024-10-08T20:15:14.274809003Z" level=info msg="StartContainer for \"a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3\" returns successfully" Oct 8 20:15:14.301356 systemd[1]: cri-containerd-a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3.scope: Deactivated successfully. Oct 8 20:15:14.335907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3-rootfs.mount: Deactivated successfully. 
Oct 8 20:15:14.358167 kubelet[2770]: I1008 20:15:14.356748 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:15:14.379932 kubelet[2770]: I1008 20:15:14.378796 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6f54f477d7-6lhqd" podStartSLOduration=2.705375019 podStartE2EDuration="5.378753165s" podCreationTimestamp="2024-10-08 20:15:09 +0000 UTC" firstStartedPulling="2024-10-08 20:15:09.822510546 +0000 UTC m=+23.727102266" lastFinishedPulling="2024-10-08 20:15:12.495888692 +0000 UTC m=+26.400480412" observedRunningTime="2024-10-08 20:15:13.362952466 +0000 UTC m=+27.267544146" watchObservedRunningTime="2024-10-08 20:15:14.378753165 +0000 UTC m=+28.283344885" Oct 8 20:15:14.423994 containerd[1477]: time="2024-10-08T20:15:14.423451826Z" level=info msg="shim disconnected" id=a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3 namespace=k8s.io Oct 8 20:15:14.424810 containerd[1477]: time="2024-10-08T20:15:14.424194687Z" level=warning msg="cleaning up after shim disconnected" id=a8e4f305389beb54169765dd251180ec2b66c3a94904243a0d93fd7bd339e9c3 namespace=k8s.io Oct 8 20:15:14.424810 containerd[1477]: time="2024-10-08T20:15:14.424218167Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:15:15.210365 kubelet[2770]: E1008 20:15:15.210238 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:15.364635 containerd[1477]: time="2024-10-08T20:15:15.364589924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Oct 8 20:15:17.210676 kubelet[2770]: E1008 20:15:17.210298 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:19.210034 kubelet[2770]: E1008 20:15:19.209957 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:21.112575 containerd[1477]: time="2024-10-08T20:15:21.112518453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:21.113728 containerd[1477]: time="2024-10-08T20:15:21.113680764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Oct 8 20:15:21.115218 containerd[1477]: time="2024-10-08T20:15:21.114293461Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:21.117209 containerd[1477]: time="2024-10-08T20:15:21.117176978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:21.118468 containerd[1477]: time="2024-10-08T20:15:21.118437132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 5.753800407s" Oct 8 20:15:21.118568 
containerd[1477]: time="2024-10-08T20:15:21.118550935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Oct 8 20:15:21.122914 containerd[1477]: time="2024-10-08T20:15:21.122816210Z" level=info msg="CreateContainer within sandbox \"f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 8 20:15:21.143129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584350591.mount: Deactivated successfully. Oct 8 20:15:21.144497 containerd[1477]: time="2024-10-08T20:15:21.144433430Z" level=info msg="CreateContainer within sandbox \"f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3\"" Oct 8 20:15:21.146397 containerd[1477]: time="2024-10-08T20:15:21.145299453Z" level=info msg="StartContainer for \"08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3\"" Oct 8 20:15:21.176490 systemd[1]: Started cri-containerd-08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3.scope - libcontainer container 08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3. 
Oct 8 20:15:21.212000 kubelet[2770]: E1008 20:15:21.211866 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:21.213796 containerd[1477]: time="2024-10-08T20:15:21.211952442Z" level=info msg="StartContainer for \"08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3\" returns successfully" Oct 8 20:15:21.683130 containerd[1477]: time="2024-10-08T20:15:21.683063524Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 20:15:21.690008 systemd[1]: cri-containerd-08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3.scope: Deactivated successfully. Oct 8 20:15:21.714898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3-rootfs.mount: Deactivated successfully. Oct 8 20:15:21.722971 kubelet[2770]: I1008 20:15:21.722947 2770 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Oct 8 20:15:21.758075 kubelet[2770]: I1008 20:15:21.756956 2770 topology_manager.go:215] "Topology Admit Handler" podUID="46c2c0a1-3a28-46c7-9a83-9c80234de025" podNamespace="kube-system" podName="coredns-76f75df574-bfl8h" Oct 8 20:15:21.769719 systemd[1]: Created slice kubepods-burstable-pod46c2c0a1_3a28_46c7_9a83_9c80234de025.slice - libcontainer container kubepods-burstable-pod46c2c0a1_3a28_46c7_9a83_9c80234de025.slice. 
Oct 8 20:15:21.776356 kubelet[2770]: I1008 20:15:21.775110 2770 topology_manager.go:215] "Topology Admit Handler" podUID="002d6b53-3d7b-4c61-bc20-bf653c4f9b79" podNamespace="calico-system" podName="calico-kube-controllers-84ff6b6999-stszx" Oct 8 20:15:21.776356 kubelet[2770]: I1008 20:15:21.775286 2770 topology_manager.go:215] "Topology Admit Handler" podUID="b0a205fa-dab2-484e-9cb0-449d3a8666e8" podNamespace="kube-system" podName="coredns-76f75df574-94tfc" Oct 8 20:15:21.789634 systemd[1]: Created slice kubepods-burstable-podb0a205fa_dab2_484e_9cb0_449d3a8666e8.slice - libcontainer container kubepods-burstable-podb0a205fa_dab2_484e_9cb0_449d3a8666e8.slice. Oct 8 20:15:21.791444 kubelet[2770]: I1008 20:15:21.791371 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfp2v\" (UniqueName: \"kubernetes.io/projected/46c2c0a1-3a28-46c7-9a83-9c80234de025-kube-api-access-xfp2v\") pod \"coredns-76f75df574-bfl8h\" (UID: \"46c2c0a1-3a28-46c7-9a83-9c80234de025\") " pod="kube-system/coredns-76f75df574-bfl8h" Oct 8 20:15:21.791444 kubelet[2770]: I1008 20:15:21.791418 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/46c2c0a1-3a28-46c7-9a83-9c80234de025-config-volume\") pod \"coredns-76f75df574-bfl8h\" (UID: \"46c2c0a1-3a28-46c7-9a83-9c80234de025\") " pod="kube-system/coredns-76f75df574-bfl8h" Oct 8 20:15:21.799228 systemd[1]: Created slice kubepods-besteffort-pod002d6b53_3d7b_4c61_bc20_bf653c4f9b79.slice - libcontainer container kubepods-besteffort-pod002d6b53_3d7b_4c61_bc20_bf653c4f9b79.slice. 
Oct 8 20:15:21.822866 containerd[1477]: time="2024-10-08T20:15:21.822555587Z" level=info msg="shim disconnected" id=08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3 namespace=k8s.io Oct 8 20:15:21.822866 containerd[1477]: time="2024-10-08T20:15:21.822652830Z" level=warning msg="cleaning up after shim disconnected" id=08ef29d7fe6753f7d35c8a348a25b3d4666fdbca363ca0459b1d88ff047c3ad3 namespace=k8s.io Oct 8 20:15:21.822866 containerd[1477]: time="2024-10-08T20:15:21.822669030Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 20:15:21.891987 kubelet[2770]: I1008 20:15:21.891918 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a205fa-dab2-484e-9cb0-449d3a8666e8-config-volume\") pod \"coredns-76f75df574-94tfc\" (UID: \"b0a205fa-dab2-484e-9cb0-449d3a8666e8\") " pod="kube-system/coredns-76f75df574-94tfc" Oct 8 20:15:21.892171 kubelet[2770]: I1008 20:15:21.892032 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/002d6b53-3d7b-4c61-bc20-bf653c4f9b79-tigera-ca-bundle\") pod \"calico-kube-controllers-84ff6b6999-stszx\" (UID: \"002d6b53-3d7b-4c61-bc20-bf653c4f9b79\") " pod="calico-system/calico-kube-controllers-84ff6b6999-stszx" Oct 8 20:15:21.892171 kubelet[2770]: I1008 20:15:21.892079 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dsw\" (UniqueName: \"kubernetes.io/projected/002d6b53-3d7b-4c61-bc20-bf653c4f9b79-kube-api-access-t5dsw\") pod \"calico-kube-controllers-84ff6b6999-stszx\" (UID: \"002d6b53-3d7b-4c61-bc20-bf653c4f9b79\") " pod="calico-system/calico-kube-controllers-84ff6b6999-stszx" Oct 8 20:15:21.892171 kubelet[2770]: I1008 20:15:21.892150 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zn5lz\" (UniqueName: \"kubernetes.io/projected/b0a205fa-dab2-484e-9cb0-449d3a8666e8-kube-api-access-zn5lz\") pod \"coredns-76f75df574-94tfc\" (UID: \"b0a205fa-dab2-484e-9cb0-449d3a8666e8\") " pod="kube-system/coredns-76f75df574-94tfc" Oct 8 20:15:22.077413 containerd[1477]: time="2024-10-08T20:15:22.077253017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bfl8h,Uid:46c2c0a1-3a28-46c7-9a83-9c80234de025,Namespace:kube-system,Attempt:0,}" Oct 8 20:15:22.109595 containerd[1477]: time="2024-10-08T20:15:22.107902318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84ff6b6999-stszx,Uid:002d6b53-3d7b-4c61-bc20-bf653c4f9b79,Namespace:calico-system,Attempt:0,}" Oct 8 20:15:22.110118 containerd[1477]: time="2024-10-08T20:15:22.109897211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-94tfc,Uid:b0a205fa-dab2-484e-9cb0-449d3a8666e8,Namespace:kube-system,Attempt:0,}" Oct 8 20:15:22.259415 containerd[1477]: time="2024-10-08T20:15:22.259265770Z" level=error msg="Failed to destroy network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.260399 containerd[1477]: time="2024-10-08T20:15:22.260200755Z" level=error msg="encountered an error cleaning up failed sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.260399 containerd[1477]: time="2024-10-08T20:15:22.260281717Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-bfl8h,Uid:46c2c0a1-3a28-46c7-9a83-9c80234de025,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.261917 kubelet[2770]: E1008 20:15:22.261523 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.261917 kubelet[2770]: E1008 20:15:22.261609 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bfl8h" Oct 8 20:15:22.261917 kubelet[2770]: E1008 20:15:22.261633 2770 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bfl8h" Oct 8 20:15:22.262296 kubelet[2770]: E1008 20:15:22.261688 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-bfl8h_kube-system(46c2c0a1-3a28-46c7-9a83-9c80234de025)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-bfl8h_kube-system(46c2c0a1-3a28-46c7-9a83-9c80234de025)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bfl8h" podUID="46c2c0a1-3a28-46c7-9a83-9c80234de025" Oct 8 20:15:22.263932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b-shm.mount: Deactivated successfully. Oct 8 20:15:22.277875 containerd[1477]: time="2024-10-08T20:15:22.277828467Z" level=error msg="Failed to destroy network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.278265 containerd[1477]: time="2024-10-08T20:15:22.277765425Z" level=error msg="Failed to destroy network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.278752 containerd[1477]: time="2024-10-08T20:15:22.278605328Z" level=error msg="encountered an error cleaning up failed sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.278752 containerd[1477]: time="2024-10-08T20:15:22.278668089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84ff6b6999-stszx,Uid:002d6b53-3d7b-4c61-bc20-bf653c4f9b79,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.282081 kubelet[2770]: E1008 20:15:22.279011 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.282081 kubelet[2770]: E1008 20:15:22.279064 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84ff6b6999-stszx" Oct 8 20:15:22.282081 kubelet[2770]: E1008 20:15:22.281720 2770 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84ff6b6999-stszx" Oct 8 20:15:22.282081 kubelet[2770]: E1008 20:15:22.281791 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.282296 containerd[1477]: time="2024-10-08T20:15:22.281532446Z" level=error msg="encountered an error cleaning up failed sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.282296 containerd[1477]: time="2024-10-08T20:15:22.281584288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-94tfc,Uid:b0a205fa-dab2-484e-9cb0-449d3a8666e8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.280798 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa-shm.mount: Deactivated successfully. 
Oct 8 20:15:22.284073 kubelet[2770]: E1008 20:15:22.281806 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84ff6b6999-stszx_calico-system(002d6b53-3d7b-4c61-bc20-bf653c4f9b79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84ff6b6999-stszx_calico-system(002d6b53-3d7b-4c61-bc20-bf653c4f9b79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84ff6b6999-stszx" podUID="002d6b53-3d7b-4c61-bc20-bf653c4f9b79" Oct 8 20:15:22.284073 kubelet[2770]: E1008 20:15:22.281835 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-94tfc" Oct 8 20:15:22.284073 kubelet[2770]: E1008 20:15:22.281859 2770 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-94tfc" Oct 8 20:15:22.280887 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef-shm.mount: 
Deactivated successfully. Oct 8 20:15:22.284245 kubelet[2770]: E1008 20:15:22.282507 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-94tfc_kube-system(b0a205fa-dab2-484e-9cb0-449d3a8666e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-94tfc_kube-system(b0a205fa-dab2-484e-9cb0-449d3a8666e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-94tfc" podUID="b0a205fa-dab2-484e-9cb0-449d3a8666e8" Oct 8 20:15:22.383884 kubelet[2770]: I1008 20:15:22.383849 2770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:22.384786 containerd[1477]: time="2024-10-08T20:15:22.384553604Z" level=info msg="StopPodSandbox for \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\"" Oct 8 20:15:22.386580 containerd[1477]: time="2024-10-08T20:15:22.386516537Z" level=info msg="Ensure that sandbox fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b in task-service has been cleanup successfully" Oct 8 20:15:22.391326 containerd[1477]: time="2024-10-08T20:15:22.391128740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Oct 8 20:15:22.391820 kubelet[2770]: I1008 20:15:22.391588 2770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:22.392977 containerd[1477]: time="2024-10-08T20:15:22.392360573Z" level=info msg="StopPodSandbox for \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\"" Oct 8 
20:15:22.392977 containerd[1477]: time="2024-10-08T20:15:22.392571099Z" level=info msg="Ensure that sandbox fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa in task-service has been cleanup successfully" Oct 8 20:15:22.398105 kubelet[2770]: I1008 20:15:22.398063 2770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:22.399104 containerd[1477]: time="2024-10-08T20:15:22.399054352Z" level=info msg="StopPodSandbox for \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\"" Oct 8 20:15:22.399308 containerd[1477]: time="2024-10-08T20:15:22.399282999Z" level=info msg="Ensure that sandbox 8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef in task-service has been cleanup successfully" Oct 8 20:15:22.450589 containerd[1477]: time="2024-10-08T20:15:22.450495250Z" level=error msg="StopPodSandbox for \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\" failed" error="failed to destroy network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.451087 kubelet[2770]: E1008 20:15:22.450802 2770 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:22.451087 kubelet[2770]: E1008 20:15:22.450882 2770 kuberuntime_manager.go:1381] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b"} Oct 8 20:15:22.451087 kubelet[2770]: E1008 20:15:22.450917 2770 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46c2c0a1-3a28-46c7-9a83-9c80234de025\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:15:22.451087 kubelet[2770]: E1008 20:15:22.450945 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46c2c0a1-3a28-46c7-9a83-9c80234de025\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bfl8h" podUID="46c2c0a1-3a28-46c7-9a83-9c80234de025" Oct 8 20:15:22.460201 containerd[1477]: time="2024-10-08T20:15:22.460027505Z" level=error msg="StopPodSandbox for \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\" failed" error="failed to destroy network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.460856 kubelet[2770]: E1008 20:15:22.460705 2770 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:22.460856 kubelet[2770]: E1008 20:15:22.460749 2770 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef"} Oct 8 20:15:22.460856 kubelet[2770]: E1008 20:15:22.460796 2770 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"002d6b53-3d7b-4c61-bc20-bf653c4f9b79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:15:22.460856 kubelet[2770]: E1008 20:15:22.460825 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"002d6b53-3d7b-4c61-bc20-bf653c4f9b79\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84ff6b6999-stszx" podUID="002d6b53-3d7b-4c61-bc20-bf653c4f9b79" Oct 8 20:15:22.465073 containerd[1477]: time="2024-10-08T20:15:22.464645908Z" level=error msg="StopPodSandbox for \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\" failed" error="failed to 
destroy network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:22.465169 kubelet[2770]: E1008 20:15:22.464943 2770 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:22.465169 kubelet[2770]: E1008 20:15:22.464984 2770 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa"} Oct 8 20:15:22.465169 kubelet[2770]: E1008 20:15:22.465019 2770 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0a205fa-dab2-484e-9cb0-449d3a8666e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:15:22.465169 kubelet[2770]: E1008 20:15:22.465045 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0a205fa-dab2-484e-9cb0-449d3a8666e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-94tfc" podUID="b0a205fa-dab2-484e-9cb0-449d3a8666e8" Oct 8 20:15:22.631662 systemd[1]: Started sshd@11-49.13.72.235:22-121.142.87.218:51392.service - OpenSSH per-connection server daemon (121.142.87.218:51392). Oct 8 20:15:23.221030 systemd[1]: Created slice kubepods-besteffort-pod7004f452_d92e_454b_be56_1d3a59702cfb.slice - libcontainer container kubepods-besteffort-pod7004f452_d92e_454b_be56_1d3a59702cfb.slice. Oct 8 20:15:23.223840 containerd[1477]: time="2024-10-08T20:15:23.223790699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mkwvt,Uid:7004f452-d92e-454b-be56-1d3a59702cfb,Namespace:calico-system,Attempt:0,}" Oct 8 20:15:23.297791 containerd[1477]: time="2024-10-08T20:15:23.297700193Z" level=error msg="Failed to destroy network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:23.302348 containerd[1477]: time="2024-10-08T20:15:23.301683540Z" level=error msg="encountered an error cleaning up failed sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:23.302348 containerd[1477]: time="2024-10-08T20:15:23.301775542Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mkwvt,Uid:7004f452-d92e-454b-be56-1d3a59702cfb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:23.302609 kubelet[2770]: E1008 20:15:23.302065 2770 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:23.302609 kubelet[2770]: E1008 20:15:23.302128 2770 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mkwvt" Oct 8 20:15:23.302609 kubelet[2770]: E1008 20:15:23.302163 2770 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mkwvt" Oct 8 20:15:23.303244 kubelet[2770]: E1008 20:15:23.302235 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mkwvt_calico-system(7004f452-d92e-454b-be56-1d3a59702cfb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-mkwvt_calico-system(7004f452-d92e-454b-be56-1d3a59702cfb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:23.303994 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6-shm.mount: Deactivated successfully. Oct 8 20:15:23.402935 kubelet[2770]: I1008 20:15:23.402887 2770 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:23.403871 containerd[1477]: time="2024-10-08T20:15:23.403827428Z" level=info msg="StopPodSandbox for \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\"" Oct 8 20:15:23.404917 containerd[1477]: time="2024-10-08T20:15:23.404615169Z" level=info msg="Ensure that sandbox d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6 in task-service has been cleanup successfully" Oct 8 20:15:23.432554 containerd[1477]: time="2024-10-08T20:15:23.432473473Z" level=error msg="StopPodSandbox for \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\" failed" error="failed to destroy network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 20:15:23.432747 kubelet[2770]: E1008 20:15:23.432725 2770 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:23.432838 kubelet[2770]: E1008 20:15:23.432768 2770 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6"} Oct 8 20:15:23.432838 kubelet[2770]: E1008 20:15:23.432806 2770 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7004f452-d92e-454b-be56-1d3a59702cfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 20:15:23.432838 kubelet[2770]: E1008 20:15:23.432833 2770 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7004f452-d92e-454b-be56-1d3a59702cfb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mkwvt" podUID="7004f452-d92e-454b-be56-1d3a59702cfb" Oct 8 20:15:24.218238 sshd[3644]: Invalid user mdk from 121.142.87.218 port 51392 Oct 8 20:15:24.520426 sshd[3644]: Received disconnect from 121.142.87.218 port 51392:11: Bye Bye [preauth] Oct 8 20:15:24.520426 sshd[3644]: 
Disconnected from invalid user mdk 121.142.87.218 port 51392 [preauth] Oct 8 20:15:24.522459 systemd[1]: sshd@11-49.13.72.235:22-121.142.87.218:51392.service: Deactivated successfully. Oct 8 20:15:28.923049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480010547.mount: Deactivated successfully. Oct 8 20:15:28.951346 containerd[1477]: time="2024-10-08T20:15:28.951266814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:28.952230 containerd[1477]: time="2024-10-08T20:15:28.952051154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 20:15:28.952997 containerd[1477]: time="2024-10-08T20:15:28.952896897Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:28.954846 containerd[1477]: time="2024-10-08T20:15:28.954798387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:28.955960 containerd[1477]: time="2024-10-08T20:15:28.955387723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.564215741s" Oct 8 20:15:28.955960 containerd[1477]: time="2024-10-08T20:15:28.955423723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 20:15:28.971597 containerd[1477]: 
time="2024-10-08T20:15:28.971557230Z" level=info msg="CreateContainer within sandbox \"f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 20:15:28.986751 containerd[1477]: time="2024-10-08T20:15:28.986708871Z" level=info msg="CreateContainer within sandbox \"f7a573d38c722912f3612cc3dceb487b2cc834b93cfe7ce9b05fb60121906748\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3ff2fb6f14414da34796899bbf6e78e866842b8c38188b87fbc8e871de58e778\"" Oct 8 20:15:28.988921 containerd[1477]: time="2024-10-08T20:15:28.988891528Z" level=info msg="StartContainer for \"3ff2fb6f14414da34796899bbf6e78e866842b8c38188b87fbc8e871de58e778\"" Oct 8 20:15:29.016511 systemd[1]: Started cri-containerd-3ff2fb6f14414da34796899bbf6e78e866842b8c38188b87fbc8e871de58e778.scope - libcontainer container 3ff2fb6f14414da34796899bbf6e78e866842b8c38188b87fbc8e871de58e778. Oct 8 20:15:29.054132 containerd[1477]: time="2024-10-08T20:15:29.052613530Z" level=info msg="StartContainer for \"3ff2fb6f14414da34796899bbf6e78e866842b8c38188b87fbc8e871de58e778\" returns successfully" Oct 8 20:15:29.196350 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 20:15:29.196554 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 20:15:29.239767 systemd[1]: Started sshd@12-49.13.72.235:22-27.254.149.199:44846.service - OpenSSH per-connection server daemon (27.254.149.199:44846). Oct 8 20:15:30.374369 sshd[3746]: Invalid user hyewonjeon from 27.254.149.199 port 44846 Oct 8 20:15:30.582411 sshd[3746]: Received disconnect from 27.254.149.199 port 44846:11: Bye Bye [preauth] Oct 8 20:15:30.582411 sshd[3746]: Disconnected from invalid user hyewonjeon 27.254.149.199 port 44846 [preauth] Oct 8 20:15:30.584646 systemd[1]: sshd@12-49.13.72.235:22-27.254.149.199:44846.service: Deactivated successfully. 
Oct 8 20:15:30.905138 kubelet[2770]: I1008 20:15:30.905044 2770 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 20:15:30.929651 kubelet[2770]: I1008 20:15:30.929595 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-55kvf" podStartSLOduration=2.837092032 podStartE2EDuration="21.929549926s" podCreationTimestamp="2024-10-08 20:15:09 +0000 UTC" firstStartedPulling="2024-10-08 20:15:09.863196156 +0000 UTC m=+23.767787876" lastFinishedPulling="2024-10-08 20:15:28.95565405 +0000 UTC m=+42.860245770" observedRunningTime="2024-10-08 20:15:29.450895398 +0000 UTC m=+43.355487118" watchObservedRunningTime="2024-10-08 20:15:30.929549926 +0000 UTC m=+44.834141646" Oct 8 20:15:32.018343 kernel: bpftool[3982]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 20:15:32.233522 systemd-networkd[1384]: vxlan.calico: Link UP Oct 8 20:15:32.233528 systemd-networkd[1384]: vxlan.calico: Gained carrier Oct 8 20:15:33.352557 systemd-networkd[1384]: vxlan.calico: Gained IPv6LL Oct 8 20:15:34.213388 containerd[1477]: time="2024-10-08T20:15:34.212839398Z" level=info msg="StopPodSandbox for \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\"" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.280 [INFO][4070] k8s.go 608: Cleaning up netns ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.281 [INFO][4070] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" iface="eth0" netns="/var/run/netns/cni-7ff5b9da-9ffd-ae8a-e8d6-8d4914cf9325" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.282 [INFO][4070] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" iface="eth0" netns="/var/run/netns/cni-7ff5b9da-9ffd-ae8a-e8d6-8d4914cf9325" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.282 [INFO][4070] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" iface="eth0" netns="/var/run/netns/cni-7ff5b9da-9ffd-ae8a-e8d6-8d4914cf9325" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.282 [INFO][4070] k8s.go 615: Releasing IP address(es) ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.282 [INFO][4070] utils.go 188: Calico CNI releasing IP address ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.345 [INFO][4077] ipam_plugin.go 417: Releasing address using handleID ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.345 [INFO][4077] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.345 [INFO][4077] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.366 [WARNING][4077] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.366 [INFO][4077] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.368 [INFO][4077] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:34.374237 containerd[1477]: 2024-10-08 20:15:34.372 [INFO][4070] k8s.go 621: Teardown processing complete. ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:34.374774 containerd[1477]: time="2024-10-08T20:15:34.374471386Z" level=info msg="TearDown network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\" successfully" Oct 8 20:15:34.374774 containerd[1477]: time="2024-10-08T20:15:34.374497747Z" level=info msg="StopPodSandbox for \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\" returns successfully" Oct 8 20:15:34.378236 systemd[1]: run-netns-cni\x2d7ff5b9da\x2d9ffd\x2dae8a\x2de8d6\x2d8d4914cf9325.mount: Deactivated successfully. 
Oct 8 20:15:34.382891 containerd[1477]: time="2024-10-08T20:15:34.382761443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bfl8h,Uid:46c2c0a1-3a28-46c7-9a83-9c80234de025,Namespace:kube-system,Attempt:1,}" Oct 8 20:15:34.531697 systemd-networkd[1384]: cali8afa0a85484: Link UP Oct 8 20:15:34.531921 systemd-networkd[1384]: cali8afa0a85484: Gained carrier Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.439 [INFO][4084] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0 coredns-76f75df574- kube-system 46c2c0a1-3a28-46c7-9a83-9c80234de025 687 0 2024-10-08 20:15:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-1-c965454201 coredns-76f75df574-bfl8h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8afa0a85484 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.440 [INFO][4084] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.475 [INFO][4095] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" HandleID="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" 
Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.491 [INFO][4095] ipam_plugin.go 270: Auto assigning IP ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" HandleID="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-1-c965454201", "pod":"coredns-76f75df574-bfl8h", "timestamp":"2024-10-08 20:15:34.4751777 +0000 UTC"}, Hostname:"ci-3975-2-2-1-c965454201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.491 [INFO][4095] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.491 [INFO][4095] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.491 [INFO][4095] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-1-c965454201' Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.493 [INFO][4095] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.499 [INFO][4095] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.504 [INFO][4095] ipam.go 489: Trying affinity for 192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.506 [INFO][4095] ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.509 [INFO][4095] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.511 [INFO][4095] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.512 [INFO][4095] ipam.go 1685: Creating new handle: k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340 Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.517 [INFO][4095] ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.523 [INFO][4095] ipam.go 1216: Successfully claimed IPs: [192.168.86.1/26] block=192.168.86.0/26 
handle="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.523 [INFO][4095] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.1/26] handle="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.523 [INFO][4095] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:34.549259 containerd[1477]: 2024-10-08 20:15:34.523 [INFO][4095] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.86.1/26] IPv6=[] ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" HandleID="k8s-pod-network.d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.549835 containerd[1477]: 2024-10-08 20:15:34.525 [INFO][4084] k8s.go 386: Populated endpoint ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"46c2c0a1-3a28-46c7-9a83-9c80234de025", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"", Pod:"coredns-76f75df574-bfl8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8afa0a85484", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:34.549835 containerd[1477]: 2024-10-08 20:15:34.526 [INFO][4084] k8s.go 387: Calico CNI using IPs: [192.168.86.1/32] ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.549835 containerd[1477]: 2024-10-08 20:15:34.526 [INFO][4084] dataplane_linux.go 68: Setting the host side veth name to cali8afa0a85484 ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.549835 containerd[1477]: 2024-10-08 20:15:34.530 [INFO][4084] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" 
Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.549835 containerd[1477]: 2024-10-08 20:15:34.532 [INFO][4084] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"46c2c0a1-3a28-46c7-9a83-9c80234de025", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340", Pod:"coredns-76f75df574-bfl8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8afa0a85484", MAC:"3a:61:18:a1:a5:cf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:34.549835 containerd[1477]: 2024-10-08 20:15:34.544 [INFO][4084] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340" Namespace="kube-system" Pod="coredns-76f75df574-bfl8h" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:34.576774 containerd[1477]: time="2024-10-08T20:15:34.576540751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:34.576774 containerd[1477]: time="2024-10-08T20:15:34.576587232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:34.576774 containerd[1477]: time="2024-10-08T20:15:34.576600713Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:34.576774 containerd[1477]: time="2024-10-08T20:15:34.576609793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:34.600513 systemd[1]: Started cri-containerd-d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340.scope - libcontainer container d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340. 
Oct 8 20:15:34.639276 containerd[1477]: time="2024-10-08T20:15:34.639191830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bfl8h,Uid:46c2c0a1-3a28-46c7-9a83-9c80234de025,Namespace:kube-system,Attempt:1,} returns sandbox id \"d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340\"" Oct 8 20:15:34.641875 containerd[1477]: time="2024-10-08T20:15:34.641840739Z" level=info msg="CreateContainer within sandbox \"d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:15:34.664030 containerd[1477]: time="2024-10-08T20:15:34.663983278Z" level=info msg="CreateContainer within sandbox \"d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c31d9fb835484ff425659f4fc149e00c6e6a23cbbaa9b5aa1fb40bbb546b94ca\"" Oct 8 20:15:34.664826 containerd[1477]: time="2024-10-08T20:15:34.664706137Z" level=info msg="StartContainer for \"c31d9fb835484ff425659f4fc149e00c6e6a23cbbaa9b5aa1fb40bbb546b94ca\"" Oct 8 20:15:34.691547 systemd[1]: Started cri-containerd-c31d9fb835484ff425659f4fc149e00c6e6a23cbbaa9b5aa1fb40bbb546b94ca.scope - libcontainer container c31d9fb835484ff425659f4fc149e00c6e6a23cbbaa9b5aa1fb40bbb546b94ca. 
Oct 8 20:15:34.723333 containerd[1477]: time="2024-10-08T20:15:34.723226708Z" level=info msg="StartContainer for \"c31d9fb835484ff425659f4fc149e00c6e6a23cbbaa9b5aa1fb40bbb546b94ca\" returns successfully" Oct 8 20:15:35.210697 containerd[1477]: time="2024-10-08T20:15:35.210630087Z" level=info msg="StopPodSandbox for \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\"" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.265 [INFO][4209] k8s.go 608: Cleaning up netns ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.265 [INFO][4209] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" iface="eth0" netns="/var/run/netns/cni-f6daf8e1-a2a9-43ff-e53f-16aff2a47811" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.268 [INFO][4209] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" iface="eth0" netns="/var/run/netns/cni-f6daf8e1-a2a9-43ff-e53f-16aff2a47811" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.269 [INFO][4209] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" iface="eth0" netns="/var/run/netns/cni-f6daf8e1-a2a9-43ff-e53f-16aff2a47811" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.269 [INFO][4209] k8s.go 615: Releasing IP address(es) ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.269 [INFO][4209] utils.go 188: Calico CNI releasing IP address ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.291 [INFO][4215] ipam_plugin.go 417: Releasing address using handleID ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.291 [INFO][4215] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.291 [INFO][4215] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.304 [WARNING][4215] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.304 [INFO][4215] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.306 [INFO][4215] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:35.309606 containerd[1477]: 2024-10-08 20:15:35.307 [INFO][4209] k8s.go 621: Teardown processing complete. ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:35.310440 containerd[1477]: time="2024-10-08T20:15:35.309715475Z" level=info msg="TearDown network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\" successfully" Oct 8 20:15:35.310440 containerd[1477]: time="2024-10-08T20:15:35.309745276Z" level=info msg="StopPodSandbox for \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\" returns successfully" Oct 8 20:15:35.311948 containerd[1477]: time="2024-10-08T20:15:35.311476201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84ff6b6999-stszx,Uid:002d6b53-3d7b-4c61-bc20-bf653c4f9b79,Namespace:calico-system,Attempt:1,}" Oct 8 20:15:35.381670 systemd[1]: run-netns-cni\x2df6daf8e1\x2da2a9\x2d43ff\x2de53f\x2d16aff2a47811.mount: Deactivated successfully. 
Oct 8 20:15:35.460724 systemd-networkd[1384]: calicfef2b149c9: Link UP Oct 8 20:15:35.462929 systemd-networkd[1384]: calicfef2b149c9: Gained carrier Oct 8 20:15:35.486131 kubelet[2770]: I1008 20:15:35.485848 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bfl8h" podStartSLOduration=34.485800073 podStartE2EDuration="34.485800073s" podCreationTimestamp="2024-10-08 20:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:15:35.469823776 +0000 UTC m=+49.374415496" watchObservedRunningTime="2024-10-08 20:15:35.485800073 +0000 UTC m=+49.390391793" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.369 [INFO][4221] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0 calico-kube-controllers-84ff6b6999- calico-system 002d6b53-3d7b-4c61-bc20-bf653c4f9b79 697 0 2024-10-08 20:15:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84ff6b6999 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975-2-2-1-c965454201 calico-kube-controllers-84ff6b6999-stszx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicfef2b149c9 [] []}} ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.369 [INFO][4221] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" 
Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.407 [INFO][4232] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" HandleID="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.420 [INFO][4232] ipam_plugin.go 270: Auto assigning IP ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" HandleID="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-1-c965454201", "pod":"calico-kube-controllers-84ff6b6999-stszx", "timestamp":"2024-10-08 20:15:35.407696113 +0000 UTC"}, Hostname:"ci-3975-2-2-1-c965454201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.420 [INFO][4232] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.420 [INFO][4232] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.420 [INFO][4232] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-1-c965454201' Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.422 [INFO][4232] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.427 [INFO][4232] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.432 [INFO][4232] ipam.go 489: Trying affinity for 192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.434 [INFO][4232] ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.438 [INFO][4232] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.438 [INFO][4232] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.440 [INFO][4232] ipam.go 1685: Creating new handle: k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7 Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.445 [INFO][4232] ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.453 [INFO][4232] ipam.go 1216: Successfully claimed IPs: [192.168.86.2/26] block=192.168.86.0/26 
handle="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.453 [INFO][4232] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.2/26] handle="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.453 [INFO][4232] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:35.490239 containerd[1477]: 2024-10-08 20:15:35.454 [INFO][4232] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.86.2/26] IPv6=[] ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" HandleID="k8s-pod-network.e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.491154 containerd[1477]: 2024-10-08 20:15:35.456 [INFO][4221] k8s.go 386: Populated endpoint ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0", GenerateName:"calico-kube-controllers-84ff6b6999-", Namespace:"calico-system", SelfLink:"", UID:"002d6b53-3d7b-4c61-bc20-bf653c4f9b79", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84ff6b6999", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"", Pod:"calico-kube-controllers-84ff6b6999-stszx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicfef2b149c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:35.491154 containerd[1477]: 2024-10-08 20:15:35.456 [INFO][4221] k8s.go 387: Calico CNI using IPs: [192.168.86.2/32] ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.491154 containerd[1477]: 2024-10-08 20:15:35.456 [INFO][4221] dataplane_linux.go 68: Setting the host side veth name to calicfef2b149c9 ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.491154 containerd[1477]: 2024-10-08 20:15:35.462 [INFO][4221] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 
8 20:15:35.491154 containerd[1477]: 2024-10-08 20:15:35.463 [INFO][4221] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0", GenerateName:"calico-kube-controllers-84ff6b6999-", Namespace:"calico-system", SelfLink:"", UID:"002d6b53-3d7b-4c61-bc20-bf653c4f9b79", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84ff6b6999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7", Pod:"calico-kube-controllers-84ff6b6999-stszx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicfef2b149c9", MAC:"46:6c:b8:ae:66:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} 
Oct 8 20:15:35.491154 containerd[1477]: 2024-10-08 20:15:35.487 [INFO][4221] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7" Namespace="calico-system" Pod="calico-kube-controllers-84ff6b6999-stszx" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:35.528047 containerd[1477]: time="2024-10-08T20:15:35.527587284Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:35.528047 containerd[1477]: time="2024-10-08T20:15:35.527659166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:35.528047 containerd[1477]: time="2024-10-08T20:15:35.527692007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:35.528047 containerd[1477]: time="2024-10-08T20:15:35.527706447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:35.561920 systemd[1]: Started cri-containerd-e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7.scope - libcontainer container e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7. 
Oct 8 20:15:35.599145 containerd[1477]: time="2024-10-08T20:15:35.599100592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84ff6b6999-stszx,Uid:002d6b53-3d7b-4c61-bc20-bf653c4f9b79,Namespace:calico-system,Attempt:1,} returns sandbox id \"e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7\"" Oct 8 20:15:35.616103 containerd[1477]: time="2024-10-08T20:15:35.615845949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 20:15:36.424780 systemd-networkd[1384]: cali8afa0a85484: Gained IPv6LL Oct 8 20:15:36.744766 systemd-networkd[1384]: calicfef2b149c9: Gained IPv6LL Oct 8 20:15:37.210660 containerd[1477]: time="2024-10-08T20:15:37.210579017Z" level=info msg="StopPodSandbox for \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\"" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.266 [INFO][4312] k8s.go 608: Cleaning up netns ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.268 [INFO][4312] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" iface="eth0" netns="/var/run/netns/cni-e31b2eb7-5f71-54e5-8d13-00171fa478f8" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.268 [INFO][4312] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" iface="eth0" netns="/var/run/netns/cni-e31b2eb7-5f71-54e5-8d13-00171fa478f8" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.270 [INFO][4312] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" iface="eth0" netns="/var/run/netns/cni-e31b2eb7-5f71-54e5-8d13-00171fa478f8" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.270 [INFO][4312] k8s.go 615: Releasing IP address(es) ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.270 [INFO][4312] utils.go 188: Calico CNI releasing IP address ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.290 [INFO][4318] ipam_plugin.go 417: Releasing address using handleID ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.290 [INFO][4318] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.290 [INFO][4318] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.303 [WARNING][4318] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.303 [INFO][4318] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.305 [INFO][4318] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:37.310082 containerd[1477]: 2024-10-08 20:15:37.307 [INFO][4312] k8s.go 621: Teardown processing complete. ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:37.311727 containerd[1477]: time="2024-10-08T20:15:37.310426016Z" level=info msg="TearDown network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\" successfully" Oct 8 20:15:37.311727 containerd[1477]: time="2024-10-08T20:15:37.310459017Z" level=info msg="StopPodSandbox for \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\" returns successfully" Oct 8 20:15:37.312789 systemd[1]: run-netns-cni\x2de31b2eb7\x2d5f71\x2d54e5\x2d8d13\x2d00171fa478f8.mount: Deactivated successfully. 
Oct 8 20:15:37.313254 containerd[1477]: time="2024-10-08T20:15:37.313201529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-94tfc,Uid:b0a205fa-dab2-484e-9cb0-449d3a8666e8,Namespace:kube-system,Attempt:1,}" Oct 8 20:15:37.473080 systemd-networkd[1384]: calid349afedd36: Link UP Oct 8 20:15:37.474085 systemd-networkd[1384]: calid349afedd36: Gained carrier Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.375 [INFO][4325] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0 coredns-76f75df574- kube-system b0a205fa-dab2-484e-9cb0-449d3a8666e8 713 0 2024-10-08 20:15:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975-2-2-1-c965454201 coredns-76f75df574-94tfc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid349afedd36 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.375 [INFO][4325] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.404 [INFO][4335] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" HandleID="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" 
Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.417 [INFO][4335] ipam_plugin.go 270: Auto assigning IP ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" HandleID="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289d70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975-2-2-1-c965454201", "pod":"coredns-76f75df574-94tfc", "timestamp":"2024-10-08 20:15:37.404166177 +0000 UTC"}, Hostname:"ci-3975-2-2-1-c965454201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.417 [INFO][4335] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.418 [INFO][4335] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.418 [INFO][4335] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-1-c965454201' Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.420 [INFO][4335] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.429 [INFO][4335] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.435 [INFO][4335] ipam.go 489: Trying affinity for 192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.438 [INFO][4335] ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.440 [INFO][4335] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.441 [INFO][4335] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.443 [INFO][4335] ipam.go 1685: Creating new handle: k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.451 [INFO][4335] ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.462 [INFO][4335] ipam.go 1216: Successfully claimed IPs: [192.168.86.3/26] block=192.168.86.0/26 
handle="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.463 [INFO][4335] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.3/26] handle="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.463 [INFO][4335] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:37.502466 containerd[1477]: 2024-10-08 20:15:37.463 [INFO][4335] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.86.3/26] IPv6=[] ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" HandleID="k8s-pod-network.ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.502988 containerd[1477]: 2024-10-08 20:15:37.468 [INFO][4325] k8s.go 386: Populated endpoint ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b0a205fa-dab2-484e-9cb0-449d3a8666e8", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"", Pod:"coredns-76f75df574-94tfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid349afedd36", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:37.502988 containerd[1477]: 2024-10-08 20:15:37.468 [INFO][4325] k8s.go 387: Calico CNI using IPs: [192.168.86.3/32] ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.502988 containerd[1477]: 2024-10-08 20:15:37.468 [INFO][4325] dataplane_linux.go 68: Setting the host side veth name to calid349afedd36 ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.502988 containerd[1477]: 2024-10-08 20:15:37.474 [INFO][4325] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" 
Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.502988 containerd[1477]: 2024-10-08 20:15:37.477 [INFO][4325] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b0a205fa-dab2-484e-9cb0-449d3a8666e8", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b", Pod:"coredns-76f75df574-94tfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid349afedd36", MAC:"b6:e9:4b:39:e8:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:37.502988 containerd[1477]: 2024-10-08 20:15:37.498 [INFO][4325] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b" Namespace="kube-system" Pod="coredns-76f75df574-94tfc" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:37.533232 containerd[1477]: time="2024-10-08T20:15:37.532354194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:37.533232 containerd[1477]: time="2024-10-08T20:15:37.532443957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:37.533232 containerd[1477]: time="2024-10-08T20:15:37.532468397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:37.533232 containerd[1477]: time="2024-10-08T20:15:37.532483398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:37.563682 systemd[1]: Started cri-containerd-ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b.scope - libcontainer container ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b. 
Oct 8 20:15:37.610596 containerd[1477]: time="2024-10-08T20:15:37.610474068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-94tfc,Uid:b0a205fa-dab2-484e-9cb0-449d3a8666e8,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b\"" Oct 8 20:15:37.613741 containerd[1477]: time="2024-10-08T20:15:37.613645631Z" level=info msg="CreateContainer within sandbox \"ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 20:15:37.635962 containerd[1477]: time="2024-10-08T20:15:37.635902610Z" level=info msg="CreateContainer within sandbox \"ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dce90fbf2e8949f69f0c8d314275e639f32a638cd6ba10c77193b804a1a8b619\"" Oct 8 20:15:37.637525 containerd[1477]: time="2024-10-08T20:15:37.637474371Z" level=info msg="StartContainer for \"dce90fbf2e8949f69f0c8d314275e639f32a638cd6ba10c77193b804a1a8b619\"" Oct 8 20:15:37.670509 systemd[1]: Started cri-containerd-dce90fbf2e8949f69f0c8d314275e639f32a638cd6ba10c77193b804a1a8b619.scope - libcontainer container dce90fbf2e8949f69f0c8d314275e639f32a638cd6ba10c77193b804a1a8b619. 
Oct 8 20:15:37.707965 containerd[1477]: time="2024-10-08T20:15:37.707914605Z" level=info msg="StartContainer for \"dce90fbf2e8949f69f0c8d314275e639f32a638cd6ba10c77193b804a1a8b619\" returns successfully" Oct 8 20:15:38.212835 containerd[1477]: time="2024-10-08T20:15:38.212447932Z" level=info msg="StopPodSandbox for \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\"" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.292 [INFO][4453] k8s.go 608: Cleaning up netns ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.293 [INFO][4453] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" iface="eth0" netns="/var/run/netns/cni-f70743f3-b4b9-1600-951d-1cafd33bc046" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.293 [INFO][4453] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" iface="eth0" netns="/var/run/netns/cni-f70743f3-b4b9-1600-951d-1cafd33bc046" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.293 [INFO][4453] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" iface="eth0" netns="/var/run/netns/cni-f70743f3-b4b9-1600-951d-1cafd33bc046" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.294 [INFO][4453] k8s.go 615: Releasing IP address(es) ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.294 [INFO][4453] utils.go 188: Calico CNI releasing IP address ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.333 [INFO][4463] ipam_plugin.go 417: Releasing address using handleID ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.334 [INFO][4463] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.335 [INFO][4463] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.347 [WARNING][4463] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.347 [INFO][4463] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.348 [INFO][4463] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:38.353267 containerd[1477]: 2024-10-08 20:15:38.351 [INFO][4453] k8s.go 621: Teardown processing complete. ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:38.355742 containerd[1477]: time="2024-10-08T20:15:38.355489331Z" level=info msg="TearDown network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\" successfully" Oct 8 20:15:38.355742 containerd[1477]: time="2024-10-08T20:15:38.355555533Z" level=info msg="StopPodSandbox for \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\" returns successfully" Oct 8 20:15:38.355550 systemd[1]: run-netns-cni\x2df70743f3\x2db4b9\x2d1600\x2d951d\x2d1cafd33bc046.mount: Deactivated successfully. 
Oct 8 20:15:38.364403 containerd[1477]: time="2024-10-08T20:15:38.363631583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mkwvt,Uid:7004f452-d92e-454b-be56-1d3a59702cfb,Namespace:calico-system,Attempt:1,}" Oct 8 20:15:38.533329 kubelet[2770]: I1008 20:15:38.531568 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-94tfc" podStartSLOduration=37.531522187 podStartE2EDuration="37.531522187s" podCreationTimestamp="2024-10-08 20:15:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 20:15:38.531352103 +0000 UTC m=+52.435943823" watchObservedRunningTime="2024-10-08 20:15:38.531522187 +0000 UTC m=+52.436113867" Oct 8 20:15:38.619521 systemd-networkd[1384]: cali892c3cbe488: Link UP Oct 8 20:15:38.620957 systemd-networkd[1384]: cali892c3cbe488: Gained carrier Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.440 [INFO][4469] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0 csi-node-driver- calico-system 7004f452-d92e-454b-be56-1d3a59702cfb 724 0 2024-10-08 20:15:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975-2-2-1-c965454201 csi-node-driver-mkwvt eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali892c3cbe488 [] []}} ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Namespace="calico-system" Pod="csi-node-driver-mkwvt" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.440 
[INFO][4469] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Namespace="calico-system" Pod="csi-node-driver-mkwvt" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.518 [INFO][4482] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" HandleID="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.546 [INFO][4482] ipam_plugin.go 270: Auto assigning IP ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" HandleID="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030cfa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975-2-2-1-c965454201", "pod":"csi-node-driver-mkwvt", "timestamp":"2024-10-08 20:15:38.518861978 +0000 UTC"}, Hostname:"ci-3975-2-2-1-c965454201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.547 [INFO][4482] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.547 [INFO][4482] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.547 [INFO][4482] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-1-c965454201' Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.552 [INFO][4482] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.561 [INFO][4482] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.570 [INFO][4482] ipam.go 489: Trying affinity for 192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.577 [INFO][4482] ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.582 [INFO][4482] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.583 [INFO][4482] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.587 [INFO][4482] ipam.go 1685: Creating new handle: k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.594 [INFO][4482] ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.604 [INFO][4482] ipam.go 1216: Successfully claimed IPs: [192.168.86.4/26] block=192.168.86.0/26 
handle="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.604 [INFO][4482] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.4/26] handle="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.605 [INFO][4482] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:38.645066 containerd[1477]: 2024-10-08 20:15:38.605 [INFO][4482] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.86.4/26] IPv6=[] ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" HandleID="k8s-pod-network.3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.645654 containerd[1477]: 2024-10-08 20:15:38.611 [INFO][4469] k8s.go 386: Populated endpoint ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Namespace="calico-system" Pod="csi-node-driver-mkwvt" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7004f452-d92e-454b-be56-1d3a59702cfb", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"", Pod:"csi-node-driver-mkwvt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali892c3cbe488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:38.645654 containerd[1477]: 2024-10-08 20:15:38.612 [INFO][4469] k8s.go 387: Calico CNI using IPs: [192.168.86.4/32] ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Namespace="calico-system" Pod="csi-node-driver-mkwvt" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.645654 containerd[1477]: 2024-10-08 20:15:38.612 [INFO][4469] dataplane_linux.go 68: Setting the host side veth name to cali892c3cbe488 ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Namespace="calico-system" Pod="csi-node-driver-mkwvt" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.645654 containerd[1477]: 2024-10-08 20:15:38.621 [INFO][4469] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Namespace="calico-system" Pod="csi-node-driver-mkwvt" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.645654 containerd[1477]: 2024-10-08 20:15:38.623 [INFO][4469] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" 
Namespace="calico-system" Pod="csi-node-driver-mkwvt" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7004f452-d92e-454b-be56-1d3a59702cfb", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e", Pod:"csi-node-driver-mkwvt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali892c3cbe488", MAC:"ba:3f:35:68:63:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:38.645654 containerd[1477]: 2024-10-08 20:15:38.641 [INFO][4469] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e" Namespace="calico-system" Pod="csi-node-driver-mkwvt" 
WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:38.688952 containerd[1477]: time="2024-10-08T20:15:38.686902467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:38.688952 containerd[1477]: time="2024-10-08T20:15:38.686959148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:38.688952 containerd[1477]: time="2024-10-08T20:15:38.686977549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:38.688952 containerd[1477]: time="2024-10-08T20:15:38.686991269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:38.721651 systemd[1]: Started cri-containerd-3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e.scope - libcontainer container 3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e. 
Oct 8 20:15:38.768343 containerd[1477]: time="2024-10-08T20:15:38.768186860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mkwvt,Uid:7004f452-d92e-454b-be56-1d3a59702cfb,Namespace:calico-system,Attempt:1,} returns sandbox id \"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e\"" Oct 8 20:15:38.856484 systemd-networkd[1384]: calid349afedd36: Gained IPv6LL Oct 8 20:15:38.994654 containerd[1477]: time="2024-10-08T20:15:38.994347259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:38.995640 containerd[1477]: time="2024-10-08T20:15:38.995463248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 20:15:38.996701 containerd[1477]: time="2024-10-08T20:15:38.996639399Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:38.998882 containerd[1477]: time="2024-10-08T20:15:38.998830936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:39.000090 containerd[1477]: time="2024-10-08T20:15:38.999574035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 3.383651404s" Oct 8 20:15:39.000090 containerd[1477]: time="2024-10-08T20:15:38.999608916Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 20:15:39.003149 containerd[1477]: time="2024-10-08T20:15:39.003114047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 20:15:39.009253 containerd[1477]: time="2024-10-08T20:15:39.009196285Z" level=info msg="CreateContainer within sandbox \"e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 20:15:39.045871 containerd[1477]: time="2024-10-08T20:15:39.045795795Z" level=info msg="CreateContainer within sandbox \"e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f\"" Oct 8 20:15:39.047842 containerd[1477]: time="2024-10-08T20:15:39.046723259Z" level=info msg="StartContainer for \"e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f\"" Oct 8 20:15:39.079501 systemd[1]: Started cri-containerd-e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f.scope - libcontainer container e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f. Oct 8 20:15:39.117382 containerd[1477]: time="2024-10-08T20:15:39.117176488Z" level=info msg="StartContainer for \"e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f\" returns successfully" Oct 8 20:15:39.527199 systemd[1]: run-containerd-runc-k8s.io-e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f-runc.652LQC.mount: Deactivated successfully. 
Oct 8 20:15:39.545095 kubelet[2770]: I1008 20:15:39.545030 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84ff6b6999-stszx" podStartSLOduration=27.145316812 podStartE2EDuration="30.544981834s" podCreationTimestamp="2024-10-08 20:15:09 +0000 UTC" firstStartedPulling="2024-10-08 20:15:35.600434547 +0000 UTC m=+49.505026227" lastFinishedPulling="2024-10-08 20:15:39.000099529 +0000 UTC m=+52.904691249" observedRunningTime="2024-10-08 20:15:39.521587187 +0000 UTC m=+53.426178907" watchObservedRunningTime="2024-10-08 20:15:39.544981834 +0000 UTC m=+53.449573554" Oct 8 20:15:40.584661 systemd-networkd[1384]: cali892c3cbe488: Gained IPv6LL Oct 8 20:15:40.891716 containerd[1477]: time="2024-10-08T20:15:40.891575760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:40.893064 containerd[1477]: time="2024-10-08T20:15:40.892867154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 20:15:40.894284 containerd[1477]: time="2024-10-08T20:15:40.894076665Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:40.896202 containerd[1477]: time="2024-10-08T20:15:40.896173160Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:40.897005 containerd[1477]: time="2024-10-08T20:15:40.896979181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.893829212s" Oct 8 20:15:40.897131 containerd[1477]: time="2024-10-08T20:15:40.897113464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 20:15:40.903736 containerd[1477]: time="2024-10-08T20:15:40.902924975Z" level=info msg="CreateContainer within sandbox \"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 20:15:40.933695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029471182.mount: Deactivated successfully. Oct 8 20:15:40.940621 containerd[1477]: time="2024-10-08T20:15:40.940568711Z" level=info msg="CreateContainer within sandbox \"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d6e6dded468fcbe4ba034f86ad413f0d21f1ffca7fbcf182a9d3d56816ced36\"" Oct 8 20:15:40.942437 containerd[1477]: time="2024-10-08T20:15:40.941272729Z" level=info msg="StartContainer for \"5d6e6dded468fcbe4ba034f86ad413f0d21f1ffca7fbcf182a9d3d56816ced36\"" Oct 8 20:15:40.989501 systemd[1]: Started cri-containerd-5d6e6dded468fcbe4ba034f86ad413f0d21f1ffca7fbcf182a9d3d56816ced36.scope - libcontainer container 5d6e6dded468fcbe4ba034f86ad413f0d21f1ffca7fbcf182a9d3d56816ced36. 
Oct 8 20:15:41.034100 containerd[1477]: time="2024-10-08T20:15:41.033630722Z" level=info msg="StartContainer for \"5d6e6dded468fcbe4ba034f86ad413f0d21f1ffca7fbcf182a9d3d56816ced36\" returns successfully" Oct 8 20:15:41.035335 containerd[1477]: time="2024-10-08T20:15:41.035285005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 20:15:43.208462 containerd[1477]: time="2024-10-08T20:15:43.208371940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:43.209956 containerd[1477]: time="2024-10-08T20:15:43.209635933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 20:15:43.210214 containerd[1477]: time="2024-10-08T20:15:43.210188587Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:43.215022 containerd[1477]: time="2024-10-08T20:15:43.214944590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:43.217924 containerd[1477]: time="2024-10-08T20:15:43.217506696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 2.182011006s" Oct 8 20:15:43.217924 containerd[1477]: time="2024-10-08T20:15:43.217599939Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 20:15:43.221366 containerd[1477]: time="2024-10-08T20:15:43.221338035Z" level=info msg="CreateContainer within sandbox \"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 20:15:43.241705 containerd[1477]: time="2024-10-08T20:15:43.241642120Z" level=info msg="CreateContainer within sandbox \"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"246ca9c50528a7675d5a7322832906e8da755a82893093e61d4c4292175f259b\"" Oct 8 20:15:43.242637 containerd[1477]: time="2024-10-08T20:15:43.242559303Z" level=info msg="StartContainer for \"246ca9c50528a7675d5a7322832906e8da755a82893093e61d4c4292175f259b\"" Oct 8 20:15:43.288957 systemd[1]: run-containerd-runc-k8s.io-246ca9c50528a7675d5a7322832906e8da755a82893093e61d4c4292175f259b-runc.r491vJ.mount: Deactivated successfully. Oct 8 20:15:43.298590 systemd[1]: Started cri-containerd-246ca9c50528a7675d5a7322832906e8da755a82893093e61d4c4292175f259b.scope - libcontainer container 246ca9c50528a7675d5a7322832906e8da755a82893093e61d4c4292175f259b. 
Oct 8 20:15:43.345318 containerd[1477]: time="2024-10-08T20:15:43.344155567Z" level=info msg="StartContainer for \"246ca9c50528a7675d5a7322832906e8da755a82893093e61d4c4292175f259b\" returns successfully" Oct 8 20:15:44.379890 kubelet[2770]: I1008 20:15:44.379404 2770 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 20:15:44.379890 kubelet[2770]: I1008 20:15:44.379450 2770 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 20:15:46.248780 containerd[1477]: time="2024-10-08T20:15:46.248669381Z" level=info msg="StopPodSandbox for \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\"" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.295 [WARNING][4712] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0", GenerateName:"calico-kube-controllers-84ff6b6999-", Namespace:"calico-system", SelfLink:"", UID:"002d6b53-3d7b-4c61-bc20-bf653c4f9b79", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84ff6b6999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7", Pod:"calico-kube-controllers-84ff6b6999-stszx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicfef2b149c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.295 [INFO][4712] k8s.go 608: Cleaning up netns ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.295 [INFO][4712] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" iface="eth0" netns="" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.295 [INFO][4712] k8s.go 615: Releasing IP address(es) ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.295 [INFO][4712] utils.go 188: Calico CNI releasing IP address ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.325 [INFO][4719] ipam_plugin.go 417: Releasing address using handleID ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.326 [INFO][4719] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.326 [INFO][4719] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.339 [WARNING][4719] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.339 [INFO][4719] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.342 [INFO][4719] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:46.344952 containerd[1477]: 2024-10-08 20:15:46.343 [INFO][4712] k8s.go 621: Teardown processing complete. ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.344952 containerd[1477]: time="2024-10-08T20:15:46.344696612Z" level=info msg="TearDown network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\" successfully" Oct 8 20:15:46.344952 containerd[1477]: time="2024-10-08T20:15:46.344724133Z" level=info msg="StopPodSandbox for \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\" returns successfully" Oct 8 20:15:46.345863 containerd[1477]: time="2024-10-08T20:15:46.345757439Z" level=info msg="RemovePodSandbox for \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\"" Oct 8 20:15:46.357390 containerd[1477]: time="2024-10-08T20:15:46.345789840Z" level=info msg="Forcibly stopping sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\"" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.414 [WARNING][4738] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0", GenerateName:"calico-kube-controllers-84ff6b6999-", Namespace:"calico-system", SelfLink:"", UID:"002d6b53-3d7b-4c61-bc20-bf653c4f9b79", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84ff6b6999", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"e87b627098167966fbd991e7304bd68b1a70c6ea54bcd3036e60af1f9b4396a7", Pod:"calico-kube-controllers-84ff6b6999-stszx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicfef2b149c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.414 [INFO][4738] k8s.go 608: Cleaning up netns ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.414 [INFO][4738] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" iface="eth0" netns="" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.415 [INFO][4738] k8s.go 615: Releasing IP address(es) ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.415 [INFO][4738] utils.go 188: Calico CNI releasing IP address ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.440 [INFO][4744] ipam_plugin.go 417: Releasing address using handleID ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.440 [INFO][4744] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.440 [INFO][4744] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.449 [WARNING][4744] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.449 [INFO][4744] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" HandleID="k8s-pod-network.8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Workload="ci--3975--2--2--1--c965454201-k8s-calico--kube--controllers--84ff6b6999--stszx-eth0" Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.451 [INFO][4744] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:46.454000 containerd[1477]: 2024-10-08 20:15:46.452 [INFO][4738] k8s.go 621: Teardown processing complete. ContainerID="8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef" Oct 8 20:15:46.454973 containerd[1477]: time="2024-10-08T20:15:46.454030706Z" level=info msg="TearDown network for sandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\" successfully" Oct 8 20:15:46.457636 containerd[1477]: time="2024-10-08T20:15:46.457596197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:15:46.457738 containerd[1477]: time="2024-10-08T20:15:46.457709280Z" level=info msg="RemovePodSandbox \"8ab659019f3223d3378382b258615382412004cbcb4672ff75a5e69ec84134ef\" returns successfully" Oct 8 20:15:46.458880 containerd[1477]: time="2024-10-08T20:15:46.458675025Z" level=info msg="StopPodSandbox for \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\"" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.527 [WARNING][4762] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7004f452-d92e-454b-be56-1d3a59702cfb", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e", Pod:"csi-node-driver-mkwvt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali892c3cbe488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.527 [INFO][4762] k8s.go 608: Cleaning up netns ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.527 [INFO][4762] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" iface="eth0" netns="" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.527 [INFO][4762] k8s.go 615: Releasing IP address(es) ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.527 [INFO][4762] utils.go 188: Calico CNI releasing IP address ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.547 [INFO][4769] ipam_plugin.go 417: Releasing address using handleID ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.548 [INFO][4769] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.548 [INFO][4769] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.560 [WARNING][4769] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.561 [INFO][4769] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.563 [INFO][4769] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:46.569165 containerd[1477]: 2024-10-08 20:15:46.566 [INFO][4762] k8s.go 621: Teardown processing complete. ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.569165 containerd[1477]: time="2024-10-08T20:15:46.568836340Z" level=info msg="TearDown network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\" successfully" Oct 8 20:15:46.569165 containerd[1477]: time="2024-10-08T20:15:46.568861741Z" level=info msg="StopPodSandbox for \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\" returns successfully" Oct 8 20:15:46.572091 containerd[1477]: time="2024-10-08T20:15:46.571489048Z" level=info msg="RemovePodSandbox for \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\"" Oct 8 20:15:46.572091 containerd[1477]: time="2024-10-08T20:15:46.571523249Z" level=info msg="Forcibly stopping sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\"" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.621 [WARNING][4787] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7004f452-d92e-454b-be56-1d3a59702cfb", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"3187b67a1482e37a96bb311c25545cac7230246c35d18fb02a65da66534f602e", Pod:"csi-node-driver-mkwvt", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.86.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali892c3cbe488", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.621 [INFO][4787] k8s.go 608: Cleaning up netns ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.622 [INFO][4787] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" iface="eth0" netns="" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.622 [INFO][4787] k8s.go 615: Releasing IP address(es) ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.622 [INFO][4787] utils.go 188: Calico CNI releasing IP address ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.645 [INFO][4793] ipam_plugin.go 417: Releasing address using handleID ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.645 [INFO][4793] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.646 [INFO][4793] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.654 [WARNING][4793] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.654 [INFO][4793] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" HandleID="k8s-pod-network.d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Workload="ci--3975--2--2--1--c965454201-k8s-csi--node--driver--mkwvt-eth0" Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.656 [INFO][4793] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:46.659162 containerd[1477]: 2024-10-08 20:15:46.657 [INFO][4787] k8s.go 621: Teardown processing complete. ContainerID="d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6" Oct 8 20:15:46.660619 containerd[1477]: time="2024-10-08T20:15:46.659781400Z" level=info msg="TearDown network for sandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\" successfully" Oct 8 20:15:46.664116 containerd[1477]: time="2024-10-08T20:15:46.664079031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:15:46.664376 containerd[1477]: time="2024-10-08T20:15:46.664302997Z" level=info msg="RemovePodSandbox \"d484098e16b5f56bb414332a0417cee1c7cbbe3876bb7a84916523018f5883a6\" returns successfully" Oct 8 20:15:46.664886 containerd[1477]: time="2024-10-08T20:15:46.664859211Z" level=info msg="StopPodSandbox for \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\"" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.714 [WARNING][4811] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"46c2c0a1-3a28-46c7-9a83-9c80234de025", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340", Pod:"coredns-76f75df574-bfl8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8afa0a85484", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.714 [INFO][4811] k8s.go 608: Cleaning up netns ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.715 [INFO][4811] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" iface="eth0" netns="" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.715 [INFO][4811] k8s.go 615: Releasing IP address(es) ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.715 [INFO][4811] utils.go 188: Calico CNI releasing IP address ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.751 [INFO][4817] ipam_plugin.go 417: Releasing address using handleID ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.751 [INFO][4817] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.751 [INFO][4817] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.769 [WARNING][4817] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.769 [INFO][4817] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.773 [INFO][4817] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:46.777657 containerd[1477]: 2024-10-08 20:15:46.775 [INFO][4811] k8s.go 621: Teardown processing complete. 
ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.779752 containerd[1477]: time="2024-10-08T20:15:46.777680075Z" level=info msg="TearDown network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\" successfully" Oct 8 20:15:46.779752 containerd[1477]: time="2024-10-08T20:15:46.777722316Z" level=info msg="StopPodSandbox for \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\" returns successfully" Oct 8 20:15:46.779752 containerd[1477]: time="2024-10-08T20:15:46.778463095Z" level=info msg="RemovePodSandbox for \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\"" Oct 8 20:15:46.779752 containerd[1477]: time="2024-10-08T20:15:46.778518296Z" level=info msg="Forcibly stopping sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\"" Oct 8 20:15:46.847339 systemd[1]: run-containerd-runc-k8s.io-3ff2fb6f14414da34796899bbf6e78e866842b8c38188b87fbc8e871de58e778-runc.sVIfwV.mount: Deactivated successfully. Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.841 [WARNING][4836] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"46c2c0a1-3a28-46c7-9a83-9c80234de025", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"d8d0821991b16198875d8241de00533c82d4b25c4cfa98d8293cdf763ca28340", Pod:"coredns-76f75df574-bfl8h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8afa0a85484", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.842 [INFO][4836] k8s.go 608: 
Cleaning up netns ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.842 [INFO][4836] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" iface="eth0" netns="" Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.842 [INFO][4836] k8s.go 615: Releasing IP address(es) ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.842 [INFO][4836] utils.go 188: Calico CNI releasing IP address ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.885 [INFO][4852] ipam_plugin.go 417: Releasing address using handleID ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.885 [INFO][4852] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.885 [INFO][4852] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.894 [WARNING][4852] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.894 [INFO][4852] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" HandleID="k8s-pod-network.fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--bfl8h-eth0" Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.895 [INFO][4852] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:46.904259 containerd[1477]: 2024-10-08 20:15:46.899 [INFO][4836] k8s.go 621: Teardown processing complete. ContainerID="fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b" Oct 8 20:15:46.904259 containerd[1477]: time="2024-10-08T20:15:46.902494247Z" level=info msg="TearDown network for sandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\" successfully" Oct 8 20:15:46.909776 containerd[1477]: time="2024-10-08T20:15:46.909624270Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:15:46.910425 containerd[1477]: time="2024-10-08T20:15:46.910393770Z" level=info msg="RemovePodSandbox \"fe24edf8a1c6eca32b34b47dba1a8fd7fa61e02b689b5f610947a4731b2c6b8b\" returns successfully" Oct 8 20:15:46.911189 containerd[1477]: time="2024-10-08T20:15:46.910961384Z" level=info msg="StopPodSandbox for \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\"" Oct 8 20:15:46.967344 kubelet[2770]: I1008 20:15:46.966484 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-mkwvt" podStartSLOduration=33.526286334 podStartE2EDuration="37.966444172s" podCreationTimestamp="2024-10-08 20:15:09 +0000 UTC" firstStartedPulling="2024-10-08 20:15:38.777958754 +0000 UTC m=+52.682550434" lastFinishedPulling="2024-10-08 20:15:43.218116512 +0000 UTC m=+57.122708272" observedRunningTime="2024-10-08 20:15:43.53092631 +0000 UTC m=+57.435518070" watchObservedRunningTime="2024-10-08 20:15:46.966444172 +0000 UTC m=+60.871035892" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:46.976 [WARNING][4883] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b0a205fa-dab2-484e-9cb0-449d3a8666e8", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b", Pod:"coredns-76f75df574-94tfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid349afedd36", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:46.976 [INFO][4883] k8s.go 608: 
Cleaning up netns ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:46.976 [INFO][4883] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" iface="eth0" netns="" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:46.976 [INFO][4883] k8s.go 615: Releasing IP address(es) ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:46.976 [INFO][4883] utils.go 188: Calico CNI releasing IP address ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:46.999 [INFO][4890] ipam_plugin.go 417: Releasing address using handleID ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:47.000 [INFO][4890] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:47.000 [INFO][4890] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:47.012 [WARNING][4890] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:47.012 [INFO][4890] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:47.017 [INFO][4890] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:47.020982 containerd[1477]: 2024-10-08 20:15:47.019 [INFO][4883] k8s.go 621: Teardown processing complete. ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.021722 containerd[1477]: time="2024-10-08T20:15:47.021030056Z" level=info msg="TearDown network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\" successfully" Oct 8 20:15:47.021722 containerd[1477]: time="2024-10-08T20:15:47.021060977Z" level=info msg="StopPodSandbox for \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\" returns successfully" Oct 8 20:15:47.022445 containerd[1477]: time="2024-10-08T20:15:47.022050723Z" level=info msg="RemovePodSandbox for \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\"" Oct 8 20:15:47.022445 containerd[1477]: time="2024-10-08T20:15:47.022088284Z" level=info msg="Forcibly stopping sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\"" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.065 [WARNING][4908] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b0a205fa-dab2-484e-9cb0-449d3a8666e8", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"ac674d5508c4f7101525950aa2146f425bf7a4a3d3984c744dd31f7aa5df6b1b", Pod:"coredns-76f75df574-94tfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid349afedd36", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.065 [INFO][4908] k8s.go 608: 
Cleaning up netns ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.065 [INFO][4908] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" iface="eth0" netns="" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.065 [INFO][4908] k8s.go 615: Releasing IP address(es) ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.065 [INFO][4908] utils.go 188: Calico CNI releasing IP address ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.084 [INFO][4914] ipam_plugin.go 417: Releasing address using handleID ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.084 [INFO][4914] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.084 [INFO][4914] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.100 [WARNING][4914] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.100 [INFO][4914] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" HandleID="k8s-pod-network.fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Workload="ci--3975--2--2--1--c965454201-k8s-coredns--76f75df574--94tfc-eth0" Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.104 [INFO][4914] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:47.108872 containerd[1477]: 2024-10-08 20:15:47.106 [INFO][4908] k8s.go 621: Teardown processing complete. ContainerID="fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa" Oct 8 20:15:47.108872 containerd[1477]: time="2024-10-08T20:15:47.108843354Z" level=info msg="TearDown network for sandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\" successfully" Oct 8 20:15:47.116423 containerd[1477]: time="2024-10-08T20:15:47.116351827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 20:15:47.116577 containerd[1477]: time="2024-10-08T20:15:47.116479590Z" level=info msg="RemovePodSandbox \"fcd3e8e3abd573d3a1c93fd0ec42ee5fdf50612fbcbc3f6dfb607525a797ebfa\" returns successfully" Oct 8 20:15:48.601249 kubelet[2770]: I1008 20:15:48.599487 2770 topology_manager.go:215] "Topology Admit Handler" podUID="fd65c8e1-89fb-4280-880a-568661310f46" podNamespace="calico-apiserver" podName="calico-apiserver-669f8b5c56-vjgk5" Oct 8 20:15:48.607673 systemd[1]: Created slice kubepods-besteffort-podfd65c8e1_89fb_4280_880a_568661310f46.slice - libcontainer container kubepods-besteffort-podfd65c8e1_89fb_4280_880a_568661310f46.slice. Oct 8 20:15:48.624817 kubelet[2770]: I1008 20:15:48.624419 2770 topology_manager.go:215] "Topology Admit Handler" podUID="dbe106d4-2fe2-4809-99f8-08649d02f414" podNamespace="calico-apiserver" podName="calico-apiserver-669f8b5c56-6bkv7" Oct 8 20:15:48.635174 systemd[1]: Created slice kubepods-besteffort-poddbe106d4_2fe2_4809_99f8_08649d02f414.slice - libcontainer container kubepods-besteffort-poddbe106d4_2fe2_4809_99f8_08649d02f414.slice. 
Oct 8 20:15:48.690189 kubelet[2770]: I1008 20:15:48.689982 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk75p\" (UniqueName: \"kubernetes.io/projected/fd65c8e1-89fb-4280-880a-568661310f46-kube-api-access-gk75p\") pod \"calico-apiserver-669f8b5c56-vjgk5\" (UID: \"fd65c8e1-89fb-4280-880a-568661310f46\") " pod="calico-apiserver/calico-apiserver-669f8b5c56-vjgk5" Oct 8 20:15:48.690189 kubelet[2770]: I1008 20:15:48.690074 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fd65c8e1-89fb-4280-880a-568661310f46-calico-apiserver-certs\") pod \"calico-apiserver-669f8b5c56-vjgk5\" (UID: \"fd65c8e1-89fb-4280-880a-568661310f46\") " pod="calico-apiserver/calico-apiserver-669f8b5c56-vjgk5" Oct 8 20:15:48.690189 kubelet[2770]: I1008 20:15:48.690111 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbe106d4-2fe2-4809-99f8-08649d02f414-calico-apiserver-certs\") pod \"calico-apiserver-669f8b5c56-6bkv7\" (UID: \"dbe106d4-2fe2-4809-99f8-08649d02f414\") " pod="calico-apiserver/calico-apiserver-669f8b5c56-6bkv7" Oct 8 20:15:48.690189 kubelet[2770]: I1008 20:15:48.690134 2770 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvqw\" (UniqueName: \"kubernetes.io/projected/dbe106d4-2fe2-4809-99f8-08649d02f414-kube-api-access-rbvqw\") pod \"calico-apiserver-669f8b5c56-6bkv7\" (UID: \"dbe106d4-2fe2-4809-99f8-08649d02f414\") " pod="calico-apiserver/calico-apiserver-669f8b5c56-6bkv7" Oct 8 20:15:48.791175 kubelet[2770]: E1008 20:15:48.790691 2770 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:15:48.791175 kubelet[2770]: E1008 20:15:48.790811 2770 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe106d4-2fe2-4809-99f8-08649d02f414-calico-apiserver-certs podName:dbe106d4-2fe2-4809-99f8-08649d02f414 nodeName:}" failed. No retries permitted until 2024-10-08 20:15:49.290783728 +0000 UTC m=+63.195375488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dbe106d4-2fe2-4809-99f8-08649d02f414-calico-apiserver-certs") pod "calico-apiserver-669f8b5c56-6bkv7" (UID: "dbe106d4-2fe2-4809-99f8-08649d02f414") : secret "calico-apiserver-certs" not found Oct 8 20:15:48.791175 kubelet[2770]: E1008 20:15:48.791087 2770 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:15:48.791175 kubelet[2770]: E1008 20:15:48.791184 2770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd65c8e1-89fb-4280-880a-568661310f46-calico-apiserver-certs podName:fd65c8e1-89fb-4280-880a-568661310f46 nodeName:}" failed. No retries permitted until 2024-10-08 20:15:49.291162537 +0000 UTC m=+63.195754257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/fd65c8e1-89fb-4280-880a-568661310f46-calico-apiserver-certs") pod "calico-apiserver-669f8b5c56-vjgk5" (UID: "fd65c8e1-89fb-4280-880a-568661310f46") : secret "calico-apiserver-certs" not found Oct 8 20:15:49.294727 kubelet[2770]: E1008 20:15:49.294596 2770 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:15:49.294727 kubelet[2770]: E1008 20:15:49.294700 2770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbe106d4-2fe2-4809-99f8-08649d02f414-calico-apiserver-certs podName:dbe106d4-2fe2-4809-99f8-08649d02f414 nodeName:}" failed. 
No retries permitted until 2024-10-08 20:15:50.294674139 +0000 UTC m=+64.199265899 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/dbe106d4-2fe2-4809-99f8-08649d02f414-calico-apiserver-certs") pod "calico-apiserver-669f8b5c56-6bkv7" (UID: "dbe106d4-2fe2-4809-99f8-08649d02f414") : secret "calico-apiserver-certs" not found Oct 8 20:15:49.294727 kubelet[2770]: E1008 20:15:49.294700 2770 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 20:15:49.295113 kubelet[2770]: E1008 20:15:49.294749 2770 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fd65c8e1-89fb-4280-880a-568661310f46-calico-apiserver-certs podName:fd65c8e1-89fb-4280-880a-568661310f46 nodeName:}" failed. No retries permitted until 2024-10-08 20:15:50.29473618 +0000 UTC m=+64.199327900 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/fd65c8e1-89fb-4280-880a-568661310f46-calico-apiserver-certs") pod "calico-apiserver-669f8b5c56-vjgk5" (UID: "fd65c8e1-89fb-4280-880a-568661310f46") : secret "calico-apiserver-certs" not found Oct 8 20:15:50.413281 containerd[1477]: time="2024-10-08T20:15:50.413221981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669f8b5c56-vjgk5,Uid:fd65c8e1-89fb-4280-880a-568661310f46,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:15:50.442190 containerd[1477]: time="2024-10-08T20:15:50.441352942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669f8b5c56-6bkv7,Uid:dbe106d4-2fe2-4809-99f8-08649d02f414,Namespace:calico-apiserver,Attempt:0,}" Oct 8 20:15:50.611283 systemd-networkd[1384]: calid9cf16afdeb: Link UP Oct 8 20:15:50.613887 systemd-networkd[1384]: calid9cf16afdeb: Gained carrier Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.482 [INFO][4934] plugin.go 
326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0 calico-apiserver-669f8b5c56- calico-apiserver fd65c8e1-89fb-4280-880a-568661310f46 829 0 2024-10-08 20:15:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:669f8b5c56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-1-c965454201 calico-apiserver-669f8b5c56-vjgk5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid9cf16afdeb [] []}} ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.482 [INFO][4934] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.544 [INFO][4951] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" HandleID="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Workload="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.558 [INFO][4951] ipam_plugin.go 270: Auto assigning IP ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" 
HandleID="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Workload="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-1-c965454201", "pod":"calico-apiserver-669f8b5c56-vjgk5", "timestamp":"2024-10-08 20:15:50.544026693 +0000 UTC"}, Hostname:"ci-3975-2-2-1-c965454201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.558 [INFO][4951] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.558 [INFO][4951] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.558 [INFO][4951] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-1-c965454201' Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.560 [INFO][4951] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.565 [INFO][4951] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.571 [INFO][4951] ipam.go 489: Trying affinity for 192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.579 [INFO][4951] ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.583 [INFO][4951] ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.583 [INFO][4951] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.585 [INFO][4951] ipam.go 1685: Creating new handle: k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.591 [INFO][4951] ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.599 [INFO][4951] ipam.go 1216: Successfully claimed IPs: [192.168.86.5/26] block=192.168.86.0/26 handle="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.600 [INFO][4951] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.5/26] handle="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.600 [INFO][4951] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 20:15:50.640696 containerd[1477]: 2024-10-08 20:15:50.600 [INFO][4951] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.86.5/26] IPv6=[] ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" HandleID="k8s-pod-network.058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Workload="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" Oct 8 20:15:50.641258 containerd[1477]: 2024-10-08 20:15:50.603 [INFO][4934] k8s.go 386: Populated endpoint ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0", GenerateName:"calico-apiserver-669f8b5c56-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd65c8e1-89fb-4280-880a-568661310f46", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669f8b5c56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"", Pod:"calico-apiserver-669f8b5c56-vjgk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.5/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9cf16afdeb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:50.641258 containerd[1477]: 2024-10-08 20:15:50.603 [INFO][4934] k8s.go 387: Calico CNI using IPs: [192.168.86.5/32] ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" Oct 8 20:15:50.641258 containerd[1477]: 2024-10-08 20:15:50.604 [INFO][4934] dataplane_linux.go 68: Setting the host side veth name to calid9cf16afdeb ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" Oct 8 20:15:50.641258 containerd[1477]: 2024-10-08 20:15:50.611 [INFO][4934] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" Oct 8 20:15:50.641258 containerd[1477]: 2024-10-08 20:15:50.612 [INFO][4934] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0", GenerateName:"calico-apiserver-669f8b5c56-", Namespace:"calico-apiserver", SelfLink:"", UID:"fd65c8e1-89fb-4280-880a-568661310f46", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669f8b5c56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df", Pod:"calico-apiserver-669f8b5c56-vjgk5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid9cf16afdeb", MAC:"16:96:76:9e:ae:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:50.641258 containerd[1477]: 2024-10-08 20:15:50.636 [INFO][4934] k8s.go 500: Wrote updated endpoint to datastore ContainerID="058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-vjgk5" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--vjgk5-eth0" Oct 8 20:15:50.687972 systemd-networkd[1384]: cali32da14793b5: Link UP Oct 8 20:15:50.691208 systemd-networkd[1384]: cali32da14793b5: Gained carrier Oct 8 20:15:50.695013 
containerd[1477]: time="2024-10-08T20:15:50.694183621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:50.695013 containerd[1477]: time="2024-10-08T20:15:50.694247622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:50.695013 containerd[1477]: time="2024-10-08T20:15:50.694262183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:50.695013 containerd[1477]: time="2024-10-08T20:15:50.694272223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.521 [INFO][4941] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0 calico-apiserver-669f8b5c56- calico-apiserver dbe106d4-2fe2-4809-99f8-08649d02f414 831 0 2024-10-08 20:15:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:669f8b5c56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975-2-2-1-c965454201 calico-apiserver-669f8b5c56-6bkv7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali32da14793b5 [] []}} ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.521 [INFO][4941] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.568 [INFO][4957] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" HandleID="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Workload="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.583 [INFO][4957] ipam_plugin.go 270: Auto assigning IP ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" HandleID="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Workload="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038e5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975-2-2-1-c965454201", "pod":"calico-apiserver-669f8b5c56-6bkv7", "timestamp":"2024-10-08 20:15:50.568637643 +0000 UTC"}, Hostname:"ci-3975-2-2-1-c965454201", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.583 [INFO][4957] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.600 [INFO][4957] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.600 [INFO][4957] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975-2-2-1-c965454201' Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.604 [INFO][4957] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.620 [INFO][4957] ipam.go 372: Looking up existing affinities for host host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.634 [INFO][4957] ipam.go 489: Trying affinity for 192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.638 [INFO][4957] ipam.go 155: Attempting to load block cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.644 [INFO][4957] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.86.0/26 host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.644 [INFO][4957] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.86.0/26 handle="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.648 [INFO][4957] ipam.go 1685: Creating new handle: k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429 Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.655 [INFO][4957] ipam.go 1203: Writing block in order to claim IPs block=192.168.86.0/26 handle="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.673 [INFO][4957] ipam.go 1216: Successfully claimed IPs: [192.168.86.6/26] block=192.168.86.0/26 
handle="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.675 [INFO][4957] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.86.6/26] handle="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" host="ci-3975-2-2-1-c965454201" Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.675 [INFO][4957] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 20:15:50.719432 containerd[1477]: 2024-10-08 20:15:50.675 [INFO][4957] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.86.6/26] IPv6=[] ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" HandleID="k8s-pod-network.92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Workload="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" Oct 8 20:15:50.721436 containerd[1477]: 2024-10-08 20:15:50.682 [INFO][4941] k8s.go 386: Populated endpoint ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0", GenerateName:"calico-apiserver-669f8b5c56-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbe106d4-2fe2-4809-99f8-08649d02f414", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669f8b5c56", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"", Pod:"calico-apiserver-669f8b5c56-6bkv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32da14793b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:50.721436 containerd[1477]: 2024-10-08 20:15:50.682 [INFO][4941] k8s.go 387: Calico CNI using IPs: [192.168.86.6/32] ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" Oct 8 20:15:50.721436 containerd[1477]: 2024-10-08 20:15:50.682 [INFO][4941] dataplane_linux.go 68: Setting the host side veth name to cali32da14793b5 ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" Oct 8 20:15:50.721436 containerd[1477]: 2024-10-08 20:15:50.692 [INFO][4941] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" Oct 8 20:15:50.721436 containerd[1477]: 2024-10-08 
20:15:50.697 [INFO][4941] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0", GenerateName:"calico-apiserver-669f8b5c56-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbe106d4-2fe2-4809-99f8-08649d02f414", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 20, 15, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"669f8b5c56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975-2-2-1-c965454201", ContainerID:"92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429", Pod:"calico-apiserver-669f8b5c56-6bkv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32da14793b5", MAC:"6a:1a:f0:35:fa:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 20:15:50.721436 containerd[1477]: 2024-10-08 20:15:50.712 [INFO][4941] k8s.go 
500: Wrote updated endpoint to datastore ContainerID="92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429" Namespace="calico-apiserver" Pod="calico-apiserver-669f8b5c56-6bkv7" WorkloadEndpoint="ci--3975--2--2--1--c965454201-k8s-calico--apiserver--669f8b5c56--6bkv7-eth0" Oct 8 20:15:50.729736 systemd[1]: Started cri-containerd-058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df.scope - libcontainer container 058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df. Oct 8 20:15:50.770488 containerd[1477]: time="2024-10-08T20:15:50.769693876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 20:15:50.770488 containerd[1477]: time="2024-10-08T20:15:50.769756637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:50.770488 containerd[1477]: time="2024-10-08T20:15:50.769777198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 20:15:50.770488 containerd[1477]: time="2024-10-08T20:15:50.769790198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 20:15:50.789666 systemd[1]: Started cri-containerd-92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429.scope - libcontainer container 92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429. 
Oct 8 20:15:50.831005 containerd[1477]: time="2024-10-08T20:15:50.830963766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669f8b5c56-vjgk5,Uid:fd65c8e1-89fb-4280-880a-568661310f46,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df\"" Oct 8 20:15:50.832737 containerd[1477]: time="2024-10-08T20:15:50.832658289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:15:50.850735 containerd[1477]: time="2024-10-08T20:15:50.850658990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-669f8b5c56-6bkv7,Uid:dbe106d4-2fe2-4809-99f8-08649d02f414,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429\"" Oct 8 20:15:51.912764 systemd-networkd[1384]: calid9cf16afdeb: Gained IPv6LL Oct 8 20:15:52.617674 systemd-networkd[1384]: cali32da14793b5: Gained IPv6LL Oct 8 20:15:53.587873 containerd[1477]: time="2024-10-08T20:15:53.587543849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:53.589302 containerd[1477]: time="2024-10-08T20:15:53.589260932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 8 20:15:53.590476 containerd[1477]: time="2024-10-08T20:15:53.590402162Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:53.593678 containerd[1477]: time="2024-10-08T20:15:53.592706621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:53.593678 containerd[1477]: 
time="2024-10-08T20:15:53.593534042Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.760838791s" Oct 8 20:15:53.593678 containerd[1477]: time="2024-10-08T20:15:53.593566002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 20:15:53.595079 containerd[1477]: time="2024-10-08T20:15:53.595036720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 20:15:53.596346 containerd[1477]: time="2024-10-08T20:15:53.596288472Z" level=info msg="CreateContainer within sandbox \"058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:15:53.611169 containerd[1477]: time="2024-10-08T20:15:53.611098450Z" level=info msg="CreateContainer within sandbox \"058c9129fa8158da85c706a63c4ccd620963d9f1fc96cac2a81735985701f3df\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4aaa051f5d40e0a04eebf24513fa36e8ea8a0617271ded41e8251566b6812359\"" Oct 8 20:15:53.612515 containerd[1477]: time="2024-10-08T20:15:53.612492366Z" level=info msg="StartContainer for \"4aaa051f5d40e0a04eebf24513fa36e8ea8a0617271ded41e8251566b6812359\"" Oct 8 20:15:53.651617 systemd[1]: Started cri-containerd-4aaa051f5d40e0a04eebf24513fa36e8ea8a0617271ded41e8251566b6812359.scope - libcontainer container 4aaa051f5d40e0a04eebf24513fa36e8ea8a0617271ded41e8251566b6812359. 
Oct 8 20:15:53.693758 containerd[1477]: time="2024-10-08T20:15:53.693719282Z" level=info msg="StartContainer for \"4aaa051f5d40e0a04eebf24513fa36e8ea8a0617271ded41e8251566b6812359\" returns successfully" Oct 8 20:15:54.010834 containerd[1477]: time="2024-10-08T20:15:54.009917841Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 20:15:54.012414 containerd[1477]: time="2024-10-08T20:15:54.012380104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 20:15:54.016584 containerd[1477]: time="2024-10-08T20:15:54.016508890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 421.431848ms" Oct 8 20:15:54.016727 containerd[1477]: time="2024-10-08T20:15:54.016705135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 20:15:54.019279 containerd[1477]: time="2024-10-08T20:15:54.019119676Z" level=info msg="CreateContainer within sandbox \"92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 20:15:54.041875 containerd[1477]: time="2024-10-08T20:15:54.041779295Z" level=info msg="CreateContainer within sandbox \"92c7e064c2923323161c6d412d65ca880945f07f4226dcc6a2c6e20967368429\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"64f2a904cd67fb83bed0093e4fd4e081c8d934c195919921c64e9b9abfcd6a8d\"" Oct 8 20:15:54.046580 containerd[1477]: 
time="2024-10-08T20:15:54.046395573Z" level=info msg="StartContainer for \"64f2a904cd67fb83bed0093e4fd4e081c8d934c195919921c64e9b9abfcd6a8d\"" Oct 8 20:15:54.074779 systemd[1]: Started cri-containerd-64f2a904cd67fb83bed0093e4fd4e081c8d934c195919921c64e9b9abfcd6a8d.scope - libcontainer container 64f2a904cd67fb83bed0093e4fd4e081c8d934c195919921c64e9b9abfcd6a8d. Oct 8 20:15:54.124746 containerd[1477]: time="2024-10-08T20:15:54.124707172Z" level=info msg="StartContainer for \"64f2a904cd67fb83bed0093e4fd4e081c8d934c195919921c64e9b9abfcd6a8d\" returns successfully" Oct 8 20:15:54.600601 kubelet[2770]: I1008 20:15:54.599700 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-669f8b5c56-6bkv7" podStartSLOduration=3.436419026 podStartE2EDuration="6.599654698s" podCreationTimestamp="2024-10-08 20:15:48 +0000 UTC" firstStartedPulling="2024-10-08 20:15:50.85374867 +0000 UTC m=+64.758340430" lastFinishedPulling="2024-10-08 20:15:54.016984382 +0000 UTC m=+67.921576102" observedRunningTime="2024-10-08 20:15:54.57899133 +0000 UTC m=+68.483583050" watchObservedRunningTime="2024-10-08 20:15:54.599654698 +0000 UTC m=+68.504246418" Oct 8 20:15:55.230675 kubelet[2770]: I1008 20:15:55.230041 2770 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-669f8b5c56-vjgk5" podStartSLOduration=4.468279891 podStartE2EDuration="7.229998385s" podCreationTimestamp="2024-10-08 20:15:48 +0000 UTC" firstStartedPulling="2024-10-08 20:15:50.832378282 +0000 UTC m=+64.736970002" lastFinishedPulling="2024-10-08 20:15:53.594096776 +0000 UTC m=+67.498688496" observedRunningTime="2024-10-08 20:15:54.602445569 +0000 UTC m=+68.507037289" watchObservedRunningTime="2024-10-08 20:15:55.229998385 +0000 UTC m=+69.134590105" Oct 8 20:15:55.948195 systemd[1]: sshd@1-49.13.72.235:22-61.147.204.98:34147.service: Deactivated successfully. 
Oct 8 20:16:12.852611 systemd[1]: Started sshd@13-49.13.72.235:22-121.142.87.218:35338.service - OpenSSH per-connection server daemon (121.142.87.218:35338). Oct 8 20:16:14.392662 sshd[5241]: Invalid user seye from 121.142.87.218 port 35338 Oct 8 20:16:14.686047 sshd[5241]: Received disconnect from 121.142.87.218 port 35338:11: Bye Bye [preauth] Oct 8 20:16:14.686047 sshd[5241]: Disconnected from invalid user seye 121.142.87.218 port 35338 [preauth] Oct 8 20:16:14.697517 systemd[1]: sshd@13-49.13.72.235:22-121.142.87.218:35338.service: Deactivated successfully. Oct 8 20:16:16.474989 systemd[1]: Started sshd@14-49.13.72.235:22-27.254.149.199:56958.service - OpenSSH per-connection server daemon (27.254.149.199:56958). Oct 8 20:16:17.606887 sshd[5251]: Invalid user jamshidiye from 27.254.149.199 port 56958 Oct 8 20:16:17.818205 sshd[5251]: Received disconnect from 27.254.149.199 port 56958:11: Bye Bye [preauth] Oct 8 20:16:17.818205 sshd[5251]: Disconnected from invalid user jamshidiye 27.254.149.199 port 56958 [preauth] Oct 8 20:16:17.820766 systemd[1]: sshd@14-49.13.72.235:22-27.254.149.199:56958.service: Deactivated successfully. Oct 8 20:16:22.132560 systemd[1]: run-containerd-runc-k8s.io-e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f-runc.9qHqX7.mount: Deactivated successfully. Oct 8 20:16:59.720649 systemd[1]: Started sshd@15-49.13.72.235:22-121.142.87.218:47510.service - OpenSSH per-connection server daemon (121.142.87.218:47510). Oct 8 20:17:01.294742 systemd[1]: Started sshd@16-49.13.72.235:22-27.254.149.199:40832.service - OpenSSH per-connection server daemon (27.254.149.199:40832). 
Oct 8 20:17:01.364614 sshd[5371]: Invalid user sjpower from 121.142.87.218 port 47510 Oct 8 20:17:01.676421 sshd[5371]: Received disconnect from 121.142.87.218 port 47510:11: Bye Bye [preauth] Oct 8 20:17:01.676421 sshd[5371]: Disconnected from invalid user sjpower 121.142.87.218 port 47510 [preauth] Oct 8 20:17:01.684357 systemd[1]: sshd@15-49.13.72.235:22-121.142.87.218:47510.service: Deactivated successfully. Oct 8 20:17:02.398923 sshd[5393]: Invalid user xioaming from 27.254.149.199 port 40832 Oct 8 20:17:02.604406 sshd[5393]: Received disconnect from 27.254.149.199 port 40832:11: Bye Bye [preauth] Oct 8 20:17:02.604629 sshd[5393]: Disconnected from invalid user xioaming 27.254.149.199 port 40832 [preauth] Oct 8 20:17:02.609739 systemd[1]: sshd@16-49.13.72.235:22-27.254.149.199:40832.service: Deactivated successfully. Oct 8 20:17:46.321873 systemd[1]: Started sshd@17-49.13.72.235:22-27.254.149.199:52944.service - OpenSSH per-connection server daemon (27.254.149.199:52944). Oct 8 20:17:47.419690 sshd[5483]: Invalid user green from 27.254.149.199 port 52944 Oct 8 20:17:47.621054 sshd[5483]: Received disconnect from 27.254.149.199 port 52944:11: Bye Bye [preauth] Oct 8 20:17:47.621054 sshd[5483]: Disconnected from invalid user green 27.254.149.199 port 52944 [preauth] Oct 8 20:17:47.625493 systemd[1]: sshd@17-49.13.72.235:22-27.254.149.199:52944.service: Deactivated successfully. Oct 8 20:17:48.396836 systemd[1]: Started sshd@18-49.13.72.235:22-121.142.87.218:59688.service - OpenSSH per-connection server daemon (121.142.87.218:59688). Oct 8 20:17:49.886672 sshd[5512]: Invalid user mingyuanz from 121.142.87.218 port 59688 Oct 8 20:17:50.173850 sshd[5512]: Received disconnect from 121.142.87.218 port 59688:11: Bye Bye [preauth] Oct 8 20:17:50.173850 sshd[5512]: Disconnected from invalid user mingyuanz 121.142.87.218 port 59688 [preauth] Oct 8 20:17:50.176849 systemd[1]: sshd@18-49.13.72.235:22-121.142.87.218:59688.service: Deactivated successfully. 
Oct 8 20:18:16.836915 systemd[1]: run-containerd-runc-k8s.io-3ff2fb6f14414da34796899bbf6e78e866842b8c38188b87fbc8e871de58e778-runc.alQs3c.mount: Deactivated successfully. Oct 8 20:18:30.813700 systemd[1]: Started sshd@19-49.13.72.235:22-27.254.149.199:36824.service - OpenSSH per-connection server daemon (27.254.149.199:36824). Oct 8 20:18:31.940281 sshd[5629]: Invalid user liujia from 27.254.149.199 port 36824 Oct 8 20:18:32.147807 sshd[5629]: Received disconnect from 27.254.149.199 port 36824:11: Bye Bye [preauth] Oct 8 20:18:32.147807 sshd[5629]: Disconnected from invalid user liujia 27.254.149.199 port 36824 [preauth] Oct 8 20:18:32.151339 systemd[1]: sshd@19-49.13.72.235:22-27.254.149.199:36824.service: Deactivated successfully. Oct 8 20:18:36.549958 systemd[1]: Started sshd@20-49.13.72.235:22-121.142.87.218:43638.service - OpenSSH per-connection server daemon (121.142.87.218:43638). Oct 8 20:18:37.989904 update_engine[1460]: I1008 20:18:37.989699 1460 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 8 20:18:37.989904 update_engine[1460]: I1008 20:18:37.989883 1460 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 8 20:18:37.990622 update_engine[1460]: I1008 20:18:37.990510 1460 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.991925 1460 omaha_request_params.cc:62] Current group set to stable Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.992090 1460 update_attempter.cc:499] Already updated boot flags. Skipping. Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.992101 1460 update_attempter.cc:643] Scheduling an action processor start. 
Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.992123 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.992177 1460 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.992272 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.992281 1460 omaha_request_action.cc:272] Request: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: Oct 8 20:18:37.992396 update_engine[1460]: I1008 20:18:37.992288 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:18:37.993676 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 8 20:18:37.996243 update_engine[1460]: I1008 20:18:37.996203 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:18:38.000272 update_engine[1460]: I1008 20:18:38.000234 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:18:38.003753 update_engine[1460]: E1008 20:18:38.003705 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:18:38.003880 update_engine[1460]: I1008 20:18:38.003774 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Oct 8 20:18:38.061553 sshd[5638]: Invalid user implicit from 121.142.87.218 port 43638 Oct 8 20:18:38.310919 systemd[1]: Started sshd@21-49.13.72.235:22-188.18.49.50:35829.service - OpenSSH per-connection server daemon (188.18.49.50:35829). 
Oct 8 20:18:38.353266 sshd[5638]: Received disconnect from 121.142.87.218 port 43638:11: Bye Bye [preauth] Oct 8 20:18:38.353266 sshd[5638]: Disconnected from invalid user implicit 121.142.87.218 port 43638 [preauth] Oct 8 20:18:38.356844 systemd[1]: sshd@20-49.13.72.235:22-121.142.87.218:43638.service: Deactivated successfully. Oct 8 20:18:38.862861 sshd[5641]: Invalid user majidmsn from 188.18.49.50 port 35829 Oct 8 20:18:38.954207 sshd[5641]: Received disconnect from 188.18.49.50 port 35829:11: Bye Bye [preauth] Oct 8 20:18:38.954207 sshd[5641]: Disconnected from invalid user majidmsn 188.18.49.50 port 35829 [preauth] Oct 8 20:18:38.956794 systemd[1]: sshd@21-49.13.72.235:22-188.18.49.50:35829.service: Deactivated successfully. Oct 8 20:18:47.904437 update_engine[1460]: I1008 20:18:47.903721 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:18:47.904437 update_engine[1460]: I1008 20:18:47.903988 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:18:47.904437 update_engine[1460]: I1008 20:18:47.904249 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:18:47.905928 update_engine[1460]: E1008 20:18:47.905828 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:18:47.905928 update_engine[1460]: I1008 20:18:47.905896 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Oct 8 20:18:57.904708 update_engine[1460]: I1008 20:18:57.904070 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:18:57.904708 update_engine[1460]: I1008 20:18:57.904373 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:18:57.904708 update_engine[1460]: I1008 20:18:57.904644 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:18:57.906180 update_engine[1460]: E1008 20:18:57.906051 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:18:57.906180 update_engine[1460]: I1008 20:18:57.906140 1460 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Oct 8 20:19:07.907416 update_engine[1460]: I1008 20:19:07.907156 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:19:07.907991 update_engine[1460]: I1008 20:19:07.907500 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:19:07.907991 update_engine[1460]: I1008 20:19:07.907802 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 8 20:19:07.908685 update_engine[1460]: E1008 20:19:07.908644 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:19:07.908778 update_engine[1460]: I1008 20:19:07.908697 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 8 20:19:07.908778 update_engine[1460]: I1008 20:19:07.908704 1460 omaha_request_action.cc:617] Omaha request response: Oct 8 20:19:07.908883 update_engine[1460]: E1008 20:19:07.908792 1460 omaha_request_action.cc:636] Omaha request network transfer failed. Oct 8 20:19:07.908883 update_engine[1460]: I1008 20:19:07.908807 1460 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Oct 8 20:19:07.908883 update_engine[1460]: I1008 20:19:07.908811 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:19:07.908883 update_engine[1460]: I1008 20:19:07.908814 1460 update_attempter.cc:306] Processing Done. Oct 8 20:19:07.908883 update_engine[1460]: E1008 20:19:07.908829 1460 update_attempter.cc:619] Update failed. 
Oct 8 20:19:07.908883 update_engine[1460]: I1008 20:19:07.908833 1460 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Oct 8 20:19:07.908883 update_engine[1460]: I1008 20:19:07.908836 1460 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Oct 8 20:19:07.908883 update_engine[1460]: I1008 20:19:07.908841 1460 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Oct 8 20:19:07.909308 update_engine[1460]: I1008 20:19:07.908912 1460 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 8 20:19:07.909308 update_engine[1460]: I1008 20:19:07.908931 1460 omaha_request_action.cc:271] Posting an Omaha request to disabled Oct 8 20:19:07.909308 update_engine[1460]: I1008 20:19:07.908947 1460 omaha_request_action.cc:272] Request: Oct 8 20:19:07.909308 update_engine[1460]: Oct 8 20:19:07.909308 update_engine[1460]: Oct 8 20:19:07.909308 update_engine[1460]: Oct 8 20:19:07.909308 update_engine[1460]: Oct 8 20:19:07.909308 update_engine[1460]: Oct 8 20:19:07.909308 update_engine[1460]: Oct 8 20:19:07.909308 update_engine[1460]: I1008 20:19:07.908953 1460 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 8 20:19:07.909308 update_engine[1460]: I1008 20:19:07.909065 1460 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.909436 1460 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Oct 8 20:19:07.910083 update_engine[1460]: E1008 20:19:07.910004 1460 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.910050 1460 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.910056 1460 omaha_request_action.cc:617] Omaha request response: Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.910061 1460 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.910065 1460 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.910069 1460 update_attempter.cc:306] Processing Done. Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.910075 1460 update_attempter.cc:310] Error event sent. Oct 8 20:19:07.910083 update_engine[1460]: I1008 20:19:07.910084 1460 update_check_scheduler.cc:74] Next update check in 49m1s Oct 8 20:19:07.910518 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Oct 8 20:19:07.910518 locksmithd[1497]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 8 20:19:13.831876 systemd[1]: Started sshd@22-49.13.72.235:22-27.254.149.199:48932.service - OpenSSH per-connection server daemon (27.254.149.199:48932). Oct 8 20:19:14.980859 sshd[5748]: Invalid user pxe from 27.254.149.199 port 48932 Oct 8 20:19:15.191352 sshd[5748]: Received disconnect from 27.254.149.199 port 48932:11: Bye Bye [preauth] Oct 8 20:19:15.191352 sshd[5748]: Disconnected from invalid user pxe 27.254.149.199 port 48932 [preauth] Oct 8 20:19:15.194831 systemd[1]: sshd@22-49.13.72.235:22-27.254.149.199:48932.service: Deactivated successfully. 
Oct 8 20:19:22.125352 systemd[1]: run-containerd-runc-k8s.io-e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f-runc.Y6EuuA.mount: Deactivated successfully.
Oct 8 20:19:23.402739 systemd[1]: Started sshd@23-49.13.72.235:22-121.142.87.218:55814.service - OpenSSH per-connection server daemon (121.142.87.218:55814).
Oct 8 20:19:24.942915 sshd[5793]: Invalid user fangwei from 121.142.87.218 port 55814
Oct 8 20:19:25.234754 sshd[5793]: Received disconnect from 121.142.87.218 port 55814:11: Bye Bye [preauth]
Oct 8 20:19:25.234754 sshd[5793]: Disconnected from invalid user fangwei 121.142.87.218 port 55814 [preauth]
Oct 8 20:19:25.237817 systemd[1]: sshd@23-49.13.72.235:22-121.142.87.218:55814.service: Deactivated successfully.
Oct 8 20:19:37.250582 systemd[1]: Started sshd@24-49.13.72.235:22-139.178.89.65:53136.service - OpenSSH per-connection server daemon (139.178.89.65:53136).
Oct 8 20:19:38.223271 sshd[5814]: Accepted publickey for core from 139.178.89.65 port 53136 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:19:38.226570 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:19:38.240961 systemd-logind[1457]: New session 8 of user core.
Oct 8 20:19:38.249572 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 8 20:19:39.003694 sshd[5814]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:39.010126 systemd[1]: sshd@24-49.13.72.235:22-139.178.89.65:53136.service: Deactivated successfully.
Oct 8 20:19:39.014039 systemd[1]: session-8.scope: Deactivated successfully.
Oct 8 20:19:39.015022 systemd-logind[1457]: Session 8 logged out. Waiting for processes to exit.
Oct 8 20:19:39.016082 systemd-logind[1457]: Removed session 8.
Oct 8 20:19:44.174640 systemd[1]: Started sshd@25-49.13.72.235:22-139.178.89.65:53150.service - OpenSSH per-connection server daemon (139.178.89.65:53150).
Oct 8 20:19:45.152207 sshd[5828]: Accepted publickey for core from 139.178.89.65 port 53150 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:19:45.154659 sshd[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:19:45.163222 systemd-logind[1457]: New session 9 of user core.
Oct 8 20:19:45.170555 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 20:19:45.904096 sshd[5828]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:45.908498 systemd[1]: sshd@25-49.13.72.235:22-139.178.89.65:53150.service: Deactivated successfully.
Oct 8 20:19:45.912215 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 20:19:45.915148 systemd-logind[1457]: Session 9 logged out. Waiting for processes to exit.
Oct 8 20:19:45.916780 systemd-logind[1457]: Removed session 9.
Oct 8 20:19:51.078819 systemd[1]: Started sshd@26-49.13.72.235:22-139.178.89.65:40072.service - OpenSSH per-connection server daemon (139.178.89.65:40072).
Oct 8 20:19:52.059859 sshd[5872]: Accepted publickey for core from 139.178.89.65 port 40072 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:19:52.061667 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:19:52.068428 systemd-logind[1457]: New session 10 of user core.
Oct 8 20:19:52.072506 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 20:19:52.134268 systemd[1]: run-containerd-runc-k8s.io-e006e7adb3cdf260b1b235b4156ab13f9800389f13731b5d3a40f2115870a80f-runc.riv9tI.mount: Deactivated successfully.
Oct 8 20:19:52.814881 sshd[5872]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:52.820526 systemd-logind[1457]: Session 10 logged out. Waiting for processes to exit.
Oct 8 20:19:52.821436 systemd[1]: sshd@26-49.13.72.235:22-139.178.89.65:40072.service: Deactivated successfully.
Oct 8 20:19:52.827185 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 20:19:52.830292 systemd-logind[1457]: Removed session 10.
Oct 8 20:19:52.985798 systemd[1]: Started sshd@27-49.13.72.235:22-139.178.89.65:40082.service - OpenSSH per-connection server daemon (139.178.89.65:40082).
Oct 8 20:19:53.947710 sshd[5906]: Accepted publickey for core from 139.178.89.65 port 40082 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:19:53.949624 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:19:53.955433 systemd-logind[1457]: New session 11 of user core.
Oct 8 20:19:53.958479 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 20:19:54.758738 sshd[5906]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:54.765501 systemd[1]: sshd@27-49.13.72.235:22-139.178.89.65:40082.service: Deactivated successfully.
Oct 8 20:19:54.769419 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 20:19:54.771294 systemd-logind[1457]: Session 11 logged out. Waiting for processes to exit.
Oct 8 20:19:54.774044 systemd-logind[1457]: Removed session 11.
Oct 8 20:19:54.932918 systemd[1]: Started sshd@28-49.13.72.235:22-139.178.89.65:40092.service - OpenSSH per-connection server daemon (139.178.89.65:40092).
Oct 8 20:19:55.924007 sshd[5917]: Accepted publickey for core from 139.178.89.65 port 40092 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:19:55.926093 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:19:55.932493 systemd-logind[1457]: New session 12 of user core.
Oct 8 20:19:55.937486 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 20:19:56.694196 sshd[5917]: pam_unix(sshd:session): session closed for user core
Oct 8 20:19:56.702089 systemd[1]: sshd@28-49.13.72.235:22-139.178.89.65:40092.service: Deactivated successfully.
Oct 8 20:19:56.705430 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 20:19:56.706855 systemd-logind[1457]: Session 12 logged out. Waiting for processes to exit.
Oct 8 20:19:56.708154 systemd-logind[1457]: Removed session 12.
Oct 8 20:19:57.302722 systemd[1]: Started sshd@29-49.13.72.235:22-27.254.149.199:32806.service - OpenSSH per-connection server daemon (27.254.149.199:32806).
Oct 8 20:19:58.403621 sshd[5935]: Invalid user balajiv from 27.254.149.199 port 32806
Oct 8 20:19:58.607799 sshd[5935]: Received disconnect from 27.254.149.199 port 32806:11: Bye Bye [preauth]
Oct 8 20:19:58.607799 sshd[5935]: Disconnected from invalid user balajiv 27.254.149.199 port 32806 [preauth]
Oct 8 20:19:58.612336 systemd[1]: sshd@29-49.13.72.235:22-27.254.149.199:32806.service: Deactivated successfully.
Oct 8 20:20:01.876613 systemd[1]: Started sshd@30-49.13.72.235:22-139.178.89.65:33148.service - OpenSSH per-connection server daemon (139.178.89.65:33148).
Oct 8 20:20:02.834545 sshd[5962]: Accepted publickey for core from 139.178.89.65 port 33148 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:20:02.836966 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:20:02.844401 systemd-logind[1457]: New session 13 of user core.
Oct 8 20:20:02.853709 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 20:20:03.578692 sshd[5962]: pam_unix(sshd:session): session closed for user core
Oct 8 20:20:03.583423 systemd[1]: sshd@30-49.13.72.235:22-139.178.89.65:33148.service: Deactivated successfully.
Oct 8 20:20:03.588193 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 20:20:03.591727 systemd-logind[1457]: Session 13 logged out. Waiting for processes to exit.
Oct 8 20:20:03.593040 systemd-logind[1457]: Removed session 13.
Oct 8 20:20:03.761681 systemd[1]: Started sshd@31-49.13.72.235:22-139.178.89.65:33150.service - OpenSSH per-connection server daemon (139.178.89.65:33150).
Oct 8 20:20:04.747235 sshd[5975]: Accepted publickey for core from 139.178.89.65 port 33150 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:20:04.749536 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:20:04.755785 systemd-logind[1457]: New session 14 of user core.
Oct 8 20:20:04.764541 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 20:20:05.737790 sshd[5975]: pam_unix(sshd:session): session closed for user core
Oct 8 20:20:05.742118 systemd[1]: sshd@31-49.13.72.235:22-139.178.89.65:33150.service: Deactivated successfully.
Oct 8 20:20:05.745586 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 20:20:05.748755 systemd-logind[1457]: Session 14 logged out. Waiting for processes to exit.
Oct 8 20:20:05.750849 systemd-logind[1457]: Removed session 14.
Oct 8 20:20:05.908870 systemd[1]: Started sshd@32-49.13.72.235:22-139.178.89.65:41082.service - OpenSSH per-connection server daemon (139.178.89.65:41082).
Oct 8 20:20:06.883739 sshd[5991]: Accepted publickey for core from 139.178.89.65 port 41082 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:20:06.886235 sshd[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:20:06.898750 systemd-logind[1457]: New session 15 of user core.
Oct 8 20:20:06.902482 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 20:20:09.486705 systemd[1]: Started sshd@33-49.13.72.235:22-121.142.87.218:39758.service - OpenSSH per-connection server daemon (121.142.87.218:39758).
Oct 8 20:20:09.534016 sshd[5991]: pam_unix(sshd:session): session closed for user core
Oct 8 20:20:09.543547 systemd[1]: sshd@32-49.13.72.235:22-139.178.89.65:41082.service: Deactivated successfully.
Oct 8 20:20:09.546650 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 20:20:09.547842 systemd-logind[1457]: Session 15 logged out. Waiting for processes to exit.
Oct 8 20:20:09.549219 systemd-logind[1457]: Removed session 15.
Oct 8 20:20:09.708690 systemd[1]: Started sshd@34-49.13.72.235:22-139.178.89.65:41094.service - OpenSSH per-connection server daemon (139.178.89.65:41094).
Oct 8 20:20:10.685868 sshd[6012]: Accepted publickey for core from 139.178.89.65 port 41094 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:20:10.688543 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:20:10.702210 systemd-logind[1457]: New session 16 of user core.
Oct 8 20:20:10.706728 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 20:20:11.003560 sshd[6007]: Invalid user jalali from 121.142.87.218 port 39758
Oct 8 20:20:11.291549 sshd[6007]: Received disconnect from 121.142.87.218 port 39758:11: Bye Bye [preauth]
Oct 8 20:20:11.291549 sshd[6007]: Disconnected from invalid user jalali 121.142.87.218 port 39758 [preauth]
Oct 8 20:20:11.296211 systemd[1]: sshd@33-49.13.72.235:22-121.142.87.218:39758.service: Deactivated successfully.
Oct 8 20:20:11.569834 sshd[6012]: pam_unix(sshd:session): session closed for user core
Oct 8 20:20:11.575115 systemd[1]: sshd@34-49.13.72.235:22-139.178.89.65:41094.service: Deactivated successfully.
Oct 8 20:20:11.579693 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 20:20:11.581856 systemd-logind[1457]: Session 16 logged out. Waiting for processes to exit.
Oct 8 20:20:11.583196 systemd-logind[1457]: Removed session 16.
Oct 8 20:20:11.748714 systemd[1]: Started sshd@35-49.13.72.235:22-139.178.89.65:41106.service - OpenSSH per-connection server daemon (139.178.89.65:41106).
Oct 8 20:20:12.732815 sshd[6024]: Accepted publickey for core from 139.178.89.65 port 41106 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:20:12.735538 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:20:12.743726 systemd-logind[1457]: New session 17 of user core.
Oct 8 20:20:12.748496 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 20:20:13.491702 sshd[6024]: pam_unix(sshd:session): session closed for user core
Oct 8 20:20:13.497722 systemd[1]: sshd@35-49.13.72.235:22-139.178.89.65:41106.service: Deactivated successfully.
Oct 8 20:20:13.502710 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 20:20:13.505052 systemd-logind[1457]: Session 17 logged out. Waiting for processes to exit.
Oct 8 20:20:13.506222 systemd-logind[1457]: Removed session 17.
Oct 8 20:20:18.674948 systemd[1]: Started sshd@36-49.13.72.235:22-139.178.89.65:42284.service - OpenSSH per-connection server daemon (139.178.89.65:42284).
Oct 8 20:20:19.651416 sshd[6079]: Accepted publickey for core from 139.178.89.65 port 42284 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk
Oct 8 20:20:19.653616 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 20:20:19.662090 systemd-logind[1457]: New session 18 of user core.
Oct 8 20:20:19.665496 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 20:20:20.407495 sshd[6079]: pam_unix(sshd:session): session closed for user core
Oct 8 20:20:20.413216 systemd[1]: sshd@36-49.13.72.235:22-139.178.89.65:42284.service: Deactivated successfully.
Oct 8 20:20:20.417943 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 20:20:20.419577 systemd-logind[1457]: Session 18 logged out. Waiting for processes to exit.
Oct 8 20:20:20.420828 systemd-logind[1457]: Removed session 18.
Oct 8 20:20:25.577652 systemd[1]: Started sshd@37-49.13.72.235:22-139.178.89.65:44354.service - OpenSSH per-connection server daemon (139.178.89.65:44354).
Oct 8 20:20:26.568139 sshd[6109]: Accepted publickey for core from 139.178.89.65 port 44354 ssh2: RSA SHA256:1BW9bPWBOGXMbjmDpg9OSQH3xWiUg/+uM64Z18+Rvqk Oct 8 20:20:26.568757 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 20:20:26.574588 systemd-logind[1457]: New session 19 of user core. Oct 8 20:20:26.578509 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 20:20:27.339299 sshd[6109]: pam_unix(sshd:session): session closed for user core Oct 8 20:20:27.345399 systemd[1]: sshd@37-49.13.72.235:22-139.178.89.65:44354.service: Deactivated successfully. Oct 8 20:20:27.350977 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 20:20:27.352336 systemd-logind[1457]: Session 19 logged out. Waiting for processes to exit. Oct 8 20:20:27.353292 systemd-logind[1457]: Removed session 19. Oct 8 20:20:40.647672 systemd[1]: Started sshd@38-49.13.72.235:22-27.254.149.199:44916.service - OpenSSH per-connection server daemon (27.254.149.199:44916).