Sep 4 17:17:35.287800 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 4 17:17:35.287821 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024
Sep 4 17:17:35.287829 kernel: KASLR enabled
Sep 4 17:17:35.287837 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Sep 4 17:17:35.287843 kernel: printk: bootconsole [pl11] enabled
Sep 4 17:17:35.287848 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:17:35.287855 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3e198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18
Sep 4 17:17:35.287861 kernel: random: crng init done
Sep 4 17:17:35.287867 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:17:35.287873 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Sep 4 17:17:35.287879 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287885 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287892 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Sep 4 17:17:35.287898 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287906 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287912 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287918 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287926 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287932 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287939 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Sep 4 17:17:35.287945 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Sep 4 17:17:35.287952 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Sep 4 17:17:35.287958 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Sep 4 17:17:35.287964 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Sep 4 17:17:35.287970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Sep 4 17:17:35.287977 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Sep 4 17:17:35.287983 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Sep 4 17:17:35.287989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Sep 4 17:17:35.287997 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Sep 4 17:17:35.288003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Sep 4 17:17:35.288009 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Sep 4 17:17:35.288016 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Sep 4 17:17:35.288022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Sep 4 17:17:35.288028 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Sep 4 17:17:35.288034 kernel: NUMA: NODE_DATA [mem 0x1bf7ed800-0x1bf7f2fff]
Sep 4 17:17:35.288040 kernel: Zone ranges:
Sep 4 17:17:35.288047 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Sep 4 17:17:35.288053 kernel: DMA32 empty
Sep 4 17:17:35.288059 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 17:17:35.288067 kernel: Movable zone start for each node
Sep 4 17:17:35.288076 kernel: Early memory node ranges
Sep 4 17:17:35.288083 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Sep 4 17:17:35.288089 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Sep 4 17:17:35.288096 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Sep 4 17:17:35.288104 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Sep 4 17:17:35.288111 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Sep 4 17:17:35.288117 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Sep 4 17:17:35.288124 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Sep 4 17:17:35.288130 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Sep 4 17:17:35.288137 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Sep 4 17:17:35.288144 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Sep 4 17:17:35.288150 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Sep 4 17:17:35.288157 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:17:35.288164 kernel: psci: PSCIv1.1 detected in firmware.
Sep 4 17:17:35.288170 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:17:35.288177 kernel: psci: MIGRATE_INFO_TYPE not supported.
Sep 4 17:17:35.288185 kernel: psci: SMC Calling Convention v1.4
Sep 4 17:17:35.288192 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Sep 4 17:17:35.288199 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Sep 4 17:17:35.288206 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:17:35.288212 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:17:35.288219 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:17:35.288226 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:17:35.288233 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:17:35.288239 kernel: CPU features: detected: Hardware dirty bit management
Sep 4 17:17:35.288246 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:17:35.288267 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 4 17:17:35.288274 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 4 17:17:35.288282 kernel: CPU features: detected: ARM erratum 1418040
Sep 4 17:17:35.288289 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Sep 4 17:17:35.288296 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 4 17:17:35.288303 kernel: alternatives: applying boot alternatives
Sep 4 17:17:35.288311 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:17:35.288318 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:17:35.288324 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:17:35.290366 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:17:35.290380 kernel: Fallback order for Node 0: 0
Sep 4 17:17:35.290387 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Sep 4 17:17:35.290399 kernel: Policy zone: Normal
Sep 4 17:17:35.290406 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:17:35.290413 kernel: software IO TLB: area num 2.
Sep 4 17:17:35.290420 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB)
Sep 4 17:17:35.290428 kernel: Memory: 3986328K/4194160K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 207832K reserved, 0K cma-reserved)
Sep 4 17:17:35.290435 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:17:35.290441 kernel: trace event string verifier disabled
Sep 4 17:17:35.290448 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:17:35.290455 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:17:35.290462 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:17:35.290469 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:17:35.290476 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:17:35.290484 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:17:35.290491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:17:35.290498 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:17:35.290504 kernel: GICv3: 960 SPIs implemented
Sep 4 17:17:35.290511 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:17:35.290517 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:17:35.290524 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 4 17:17:35.290531 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Sep 4 17:17:35.290537 kernel: ITS: No ITS available, not enabling LPIs
Sep 4 17:17:35.290544 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:17:35.290551 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:17:35.290560 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 4 17:17:35.290566 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 4 17:17:35.290573 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 4 17:17:35.290580 kernel: Console: colour dummy device 80x25
Sep 4 17:17:35.290587 kernel: printk: console [tty1] enabled
Sep 4 17:17:35.290594 kernel: ACPI: Core revision 20230628
Sep 4 17:17:35.290601 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 4 17:17:35.290608 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:17:35.290615 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:17:35.290622 kernel: SELinux: Initializing.
Sep 4 17:17:35.290630 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:17:35.290637 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:17:35.290644 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:17:35.290651 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:17:35.290658 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Sep 4 17:17:35.290665 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Sep 4 17:17:35.290672 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Sep 4 17:17:35.290685 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:17:35.290693 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:17:35.290700 kernel: Remapping and enabling EFI services.
Sep 4 17:17:35.290707 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:17:35.290716 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:17:35.290723 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Sep 4 17:17:35.290730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 4 17:17:35.290738 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 4 17:17:35.290745 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:17:35.290752 kernel: SMP: Total of 2 processors activated.
Sep 4 17:17:35.290761 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:17:35.290768 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Sep 4 17:17:35.290775 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 4 17:17:35.290783 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:17:35.290790 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 4 17:17:35.290797 kernel: CPU features: detected: LSE atomic instructions
Sep 4 17:17:35.290804 kernel: CPU features: detected: Privileged Access Never
Sep 4 17:17:35.290812 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:17:35.290819 kernel: alternatives: applying system-wide alternatives
Sep 4 17:17:35.290827 kernel: devtmpfs: initialized
Sep 4 17:17:35.290834 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:17:35.290842 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:17:35.290849 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:17:35.290856 kernel: SMBIOS 3.1.0 present.
Sep 4 17:17:35.290863 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Sep 4 17:17:35.290871 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:17:35.290878 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:17:35.290887 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:17:35.290894 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:17:35.290902 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:17:35.290909 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1
Sep 4 17:17:35.290916 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:17:35.290923 kernel: cpuidle: using governor menu
Sep 4 17:17:35.290931 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:17:35.290938 kernel: ASID allocator initialised with 32768 entries
Sep 4 17:17:35.290945 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:17:35.290954 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:17:35.290962 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 4 17:17:35.290969 kernel: Modules: 0 pages in range for non-PLT usage
Sep 4 17:17:35.290976 kernel: Modules: 509120 pages in range for PLT usage
Sep 4 17:17:35.290983 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:17:35.290990 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:17:35.290998 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:17:35.291005 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:17:35.291012 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:17:35.291021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:17:35.291028 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:17:35.291035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:17:35.291042 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:17:35.291050 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:17:35.291057 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:17:35.291064 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:17:35.291071 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:17:35.291079 kernel: ACPI: Interpreter enabled
Sep 4 17:17:35.291086 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:17:35.291095 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Sep 4 17:17:35.291102 kernel: printk: console [ttyAMA0] enabled
Sep 4 17:17:35.291109 kernel: printk: bootconsole [pl11] disabled
Sep 4 17:17:35.291116 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Sep 4 17:17:35.291124 kernel: iommu: Default domain type: Translated
Sep 4 17:17:35.291131 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:17:35.291138 kernel: efivars: Registered efivars operations
Sep 4 17:17:35.291145 kernel: vgaarb: loaded
Sep 4 17:17:35.291152 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:17:35.291161 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:17:35.291168 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:17:35.291175 kernel: pnp: PnP ACPI init
Sep 4 17:17:35.291183 kernel: pnp: PnP ACPI: found 0 devices
Sep 4 17:17:35.291190 kernel: NET: Registered PF_INET protocol family
Sep 4 17:17:35.291197 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:17:35.291205 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:17:35.291212 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:17:35.291219 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:17:35.291228 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:17:35.291235 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:17:35.291242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:17:35.291250 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:17:35.291257 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:17:35.291264 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:17:35.291271 kernel: kvm [1]: HYP mode not available
Sep 4 17:17:35.291278 kernel: Initialise system trusted keyrings
Sep 4 17:17:35.291287 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:17:35.291294 kernel: Key type asymmetric registered
Sep 4 17:17:35.291301 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:17:35.291309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:17:35.291316 kernel: io scheduler mq-deadline registered
Sep 4 17:17:35.291323 kernel: io scheduler kyber registered
Sep 4 17:17:35.291339 kernel: io scheduler bfq registered
Sep 4 17:17:35.291349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:17:35.291356 kernel: thunder_xcv, ver 1.0
Sep 4 17:17:35.291363 kernel: thunder_bgx, ver 1.0
Sep 4 17:17:35.291372 kernel: nicpf, ver 1.0
Sep 4 17:17:35.291379 kernel: nicvf, ver 1.0
Sep 4 17:17:35.291522 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:17:35.291591 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:17:34 UTC (1725470254)
Sep 4 17:17:35.291601 kernel: efifb: probing for efifb
Sep 4 17:17:35.291609 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Sep 4 17:17:35.291616 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Sep 4 17:17:35.291626 kernel: efifb: scrolling: redraw
Sep 4 17:17:35.291633 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Sep 4 17:17:35.291640 kernel: Console: switching to colour frame buffer device 128x48
Sep 4 17:17:35.291647 kernel: fb0: EFI VGA frame buffer device
Sep 4 17:17:35.291655 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Sep 4 17:17:35.291662 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:17:35.291669 kernel: No ACPI PMU IRQ for CPU0
Sep 4 17:17:35.291676 kernel: No ACPI PMU IRQ for CPU1
Sep 4 17:17:35.291684 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Sep 4 17:17:35.291692 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:17:35.291699 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:17:35.291706 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:17:35.291714 kernel: Segment Routing with IPv6
Sep 4 17:17:35.291721 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:17:35.291728 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:17:35.291735 kernel: Key type dns_resolver registered
Sep 4 17:17:35.291742 kernel: registered taskstats version 1
Sep 4 17:17:35.291749 kernel: Loading compiled-in X.509 certificates
Sep 4 17:17:35.291756 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep 4 17:17:35.291765 kernel: Key type .fscrypt registered
Sep 4 17:17:35.291772 kernel: Key type fscrypt-provisioning registered
Sep 4 17:17:35.291779 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:17:35.291787 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:17:35.291794 kernel: ima: No architecture policies found
Sep 4 17:17:35.291801 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:17:35.291808 kernel: clk: Disabling unused clocks
Sep 4 17:17:35.291815 kernel: Freeing unused kernel memory: 39040K
Sep 4 17:17:35.291824 kernel: Run /init as init process
Sep 4 17:17:35.291831 kernel: with arguments:
Sep 4 17:17:35.291839 kernel: /init
Sep 4 17:17:35.291846 kernel: with environment:
Sep 4 17:17:35.291853 kernel: HOME=/
Sep 4 17:17:35.291860 kernel: TERM=linux
Sep 4 17:17:35.291867 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:17:35.291876 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:17:35.291887 systemd[1]: Detected virtualization microsoft.
Sep 4 17:17:35.291895 systemd[1]: Detected architecture arm64.
Sep 4 17:17:35.291903 systemd[1]: Running in initrd.
Sep 4 17:17:35.291910 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:17:35.291918 systemd[1]: Hostname set to .
Sep 4 17:17:35.291926 systemd[1]: Initializing machine ID from random generator.
Sep 4 17:17:35.291934 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:17:35.291942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:17:35.291951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:17:35.291959 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:17:35.291967 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:17:35.291975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:17:35.291983 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:17:35.291992 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:17:35.292000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:17:35.292010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:17:35.292018 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:17:35.292025 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:17:35.292033 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:17:35.292041 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:17:35.292049 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:17:35.292057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:17:35.292064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:17:35.292072 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:17:35.292082 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:17:35.292089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:17:35.292097 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:17:35.292106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:17:35.292113 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:17:35.292121 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:17:35.292129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:17:35.292137 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:17:35.292146 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:17:35.292154 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:17:35.292162 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:17:35.292186 systemd-journald[217]: Collecting audit messages is disabled.
Sep 4 17:17:35.292207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:17:35.292216 systemd-journald[217]: Journal started
Sep 4 17:17:35.292234 systemd-journald[217]: Runtime Journal (/run/log/journal/1ccd926ca29a44ba8b6e71aa34bd96ac) is 8.0M, max 78.6M, 70.6M free.
Sep 4 17:17:35.304184 systemd-modules-load[218]: Inserted module 'overlay'
Sep 4 17:17:35.320618 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:17:35.336976 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:17:35.356430 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:17:35.356453 kernel: Bridge firewalling registered
Sep 4 17:17:35.347630 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:17:35.347991 systemd-modules-load[218]: Inserted module 'br_netfilter'
Sep 4 17:17:35.361326 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:17:35.371856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:17:35.378351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:17:35.411695 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:17:35.420724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:17:35.440505 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:17:35.459312 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:17:35.470361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:17:35.494520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:17:35.502350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:17:35.526713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:17:35.535499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:17:35.552443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:17:35.574399 dracut-cmdline[250]: dracut-dracut-053
Sep 4 17:17:35.574399 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:17:35.566495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:17:35.585129 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:17:35.647989 systemd-resolved[264]: Positive Trust Anchors:
Sep 4 17:17:35.648007 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:17:35.648039 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:17:35.651718 systemd-resolved[264]: Defaulting to hostname 'linux'.
Sep 4 17:17:35.652568 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:17:35.661018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:17:35.740349 kernel: SCSI subsystem initialized
Sep 4 17:17:35.747352 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:17:35.758354 kernel: iscsi: registered transport (tcp)
Sep 4 17:17:35.775943 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:17:35.775971 kernel: QLogic iSCSI HBA Driver
Sep 4 17:17:35.815233 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:17:35.832632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:17:35.859742 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:17:35.859789 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:17:35.865956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:17:35.914351 kernel: raid6: neonx8 gen() 15780 MB/s
Sep 4 17:17:35.934342 kernel: raid6: neonx4 gen() 15681 MB/s
Sep 4 17:17:35.954341 kernel: raid6: neonx2 gen() 13253 MB/s
Sep 4 17:17:35.975341 kernel: raid6: neonx1 gen() 10508 MB/s
Sep 4 17:17:35.995353 kernel: raid6: int64x8 gen() 6969 MB/s
Sep 4 17:17:36.015344 kernel: raid6: int64x4 gen() 7352 MB/s
Sep 4 17:17:36.036345 kernel: raid6: int64x2 gen() 6130 MB/s
Sep 4 17:17:36.059360 kernel: raid6: int64x1 gen() 5061 MB/s
Sep 4 17:17:36.059382 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Sep 4 17:17:36.083437 kernel: raid6: .... xor() 11940 MB/s, rmw enabled
Sep 4 17:17:36.083454 kernel: raid6: using neon recovery algorithm
Sep 4 17:17:36.092343 kernel: xor: measuring software checksum speed
Sep 4 17:17:36.096341 kernel: 8regs : 19854 MB/sec
Sep 4 17:17:36.102949 kernel: 32regs : 19654 MB/sec
Sep 4 17:17:36.102970 kernel: arm64_neon : 27197 MB/sec
Sep 4 17:17:36.106864 kernel: xor: using function: arm64_neon (27197 MB/sec)
Sep 4 17:17:36.157347 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:17:36.166784 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:17:36.182459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:17:36.204017 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Sep 4 17:17:36.209320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:17:36.232476 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:17:36.249349 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Sep 4 17:17:36.275939 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:17:36.291529 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:17:36.330979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:17:36.350768 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:17:36.376563 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:17:36.389584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:17:36.403417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:17:36.416407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:17:36.433445 kernel: hv_vmbus: Vmbus version:5.3
Sep 4 17:17:36.447861 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:17:36.471478 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 4 17:17:36.471500 kernel: hv_vmbus: registering driver hid_hyperv
Sep 4 17:17:36.471513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 4 17:17:36.464628 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:17:36.491768 kernel: hv_vmbus: registering driver hv_netvsc
Sep 4 17:17:36.491813 kernel: hv_vmbus: registering driver hyperv_keyboard
Sep 4 17:17:36.502552 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Sep 4 17:17:36.493129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:17:36.515311 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Sep 4 17:17:36.503169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:17:36.544912 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Sep 4 17:17:36.545070 kernel: hv_vmbus: registering driver hv_storvsc
Sep 4 17:17:36.541010 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:17:36.579108 kernel: PTP clock support registered
Sep 4 17:17:36.579128 kernel: hv_utils: Registering HyperV Utility Driver
Sep 4 17:17:36.579139 kernel: hv_vmbus: registering driver hv_utils
Sep 4 17:17:36.579148 kernel: hv_utils: Heartbeat IC version 3.0
Sep 4 17:17:36.579164 kernel: hv_utils: Shutdown IC version 3.2
Sep 4 17:17:36.554561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:17:36.213174 kernel: hv_utils: TimeSync IC version 4.0
Sep 4 17:17:36.225040 kernel: scsi host0: storvsc_host_t
Sep 4 17:17:36.225185 kernel: scsi host1: storvsc_host_t
Sep 4 17:17:36.225277 systemd-journald[217]: Time jumped backwards, rotating.
Sep 4 17:17:36.225326 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Sep 4 17:17:36.554788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:17:36.251945 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Sep 4 17:17:36.252012 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: VF slot 1 added
Sep 4 17:17:36.199207 systemd-resolved[264]: Clock change detected. Flushing caches.
Sep 4 17:17:36.279261 kernel: hv_vmbus: registering driver hv_pci
Sep 4 17:17:36.279283 kernel: hv_pci ee16bb9d-5c6c-4a3d-929c-232d3cf69a78: PCI VMBus probing: Using version 0x10004
Sep 4 17:17:36.203062 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:17:36.238588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:17:36.261599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:17:36.339241 kernel: hv_pci ee16bb9d-5c6c-4a3d-929c-232d3cf69a78: PCI host bridge to bus 5c6c:00
Sep 4 17:17:36.339417 kernel: pci_bus 5c6c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Sep 4 17:17:36.339553 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Sep 4 17:17:36.339689 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 4 17:17:36.339701 kernel: pci_bus 5c6c:00: No busn resource found for root bus, will use [bus 00-ff]
Sep 4 17:17:36.302406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:17:36.354128 kernel: pci 5c6c:00:02.0: [15b3:1018] type 00 class 0x020000
Sep 4 17:17:36.354445 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Sep 4 17:17:36.368135 kernel: pci 5c6c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 17:17:36.383256 kernel: pci 5c6c:00:02.0: enabling Extended Tags
Sep 4 17:17:36.383317 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Sep 4 17:17:36.387321 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Sep 4 17:17:36.387448 kernel: pci 5c6c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5c6c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Sep 4 17:17:36.402029 kernel: sd 0:0:0:0: [sda] Write Protect is off
Sep 4 17:17:36.408630 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Sep 4 17:17:36.408785 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Sep 4 17:17:36.408869 kernel: pci_bus 5c6c:00: busn_res: [bus 00-ff] end is updated to 00
Sep 4 17:17:36.434398 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:17:36.434436 kernel: pci 5c6c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Sep 4 17:17:36.434626 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Sep 4 17:17:36.427566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:17:36.486337 kernel: mlx5_core 5c6c:00:02.0: enabling device (0000 -> 0002)
Sep 4 17:17:36.493603 kernel: mlx5_core 5c6c:00:02.0: firmware version: 16.30.1284
Sep 4 17:17:36.593875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Sep 4 17:17:36.625911 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Sep 4 17:17:36.642457 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (481)
Sep 4 17:17:36.642497 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (494)
Sep 4 17:17:36.665256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Sep 4 17:17:36.687342 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Sep 4 17:17:36.693959 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Sep 4 17:17:36.720306 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:17:36.744092 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:17:36.750091 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:17:36.757101 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:17:36.797110 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: VF registering: eth1
Sep 4 17:17:36.797284 kernel: mlx5_core 5c6c:00:02.0 eth1: joined to eth0
Sep 4 17:17:36.804016 kernel: mlx5_core 5c6c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Sep 4 17:17:36.817101 kernel: mlx5_core 5c6c:00:02.0 enP23660s1: renamed from eth1
Sep 4 17:17:37.763942 disk-uuid[596]: The operation has completed successfully.
Sep 4 17:17:37.769885 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Sep 4 17:17:37.843676 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:17:37.845846 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:17:37.873277 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:17:37.886222 sh[712]: Success
Sep 4 17:17:37.904498 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:17:37.971372 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:17:37.988189 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:17:37.997777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:17:38.029334 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:17:38.029385 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:38.036650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:17:38.041754 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:17:38.045922 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:17:38.118403 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:17:38.123650 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:17:38.143286 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:17:38.151222 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:17:38.186139 kernel: BTRFS info (device sda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:17:38.186185 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:38.190577 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:17:38.207109 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:17:38.219409 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:17:38.227349 kernel: BTRFS info (device sda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:17:38.233609 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:17:38.248347 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:17:38.283283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:17:38.304294 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:17:38.332522 systemd-networkd[896]: lo: Link UP
Sep 4 17:17:38.332530 systemd-networkd[896]: lo: Gained carrier
Sep 4 17:17:38.337462 systemd-networkd[896]: Enumeration completed
Sep 4 17:17:38.337564 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:17:38.338515 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:17:38.338518 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:17:38.344390 systemd[1]: Reached target network.target - Network.
Sep 4 17:17:38.430504 kernel: mlx5_core 5c6c:00:02.0 enP23660s1: Link up
Sep 4 17:17:38.472418 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: Data path switched to VF: enP23660s1
Sep 4 17:17:38.472092 systemd-networkd[896]: enP23660s1: Link UP
Sep 4 17:17:38.472180 systemd-networkd[896]: eth0: Link UP
Sep 4 17:17:38.472304 systemd-networkd[896]: eth0: Gained carrier
Sep 4 17:17:38.472312 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:17:38.496425 systemd-networkd[896]: enP23660s1: Gained carrier
Sep 4 17:17:38.505134 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16
Sep 4 17:17:38.519324 ignition[863]: Ignition 2.18.0
Sep 4 17:17:38.519335 ignition[863]: Stage: fetch-offline
Sep 4 17:17:38.523444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:17:38.519371 ignition[863]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:38.519379 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:17:38.519462 ignition[863]: parsed url from cmdline: ""
Sep 4 17:17:38.546352 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:17:38.519466 ignition[863]: no config URL provided
Sep 4 17:17:38.519470 ignition[863]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:17:38.519477 ignition[863]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:17:38.519481 ignition[863]: failed to fetch config: resource requires networking
Sep 4 17:17:38.519648 ignition[863]: Ignition finished successfully
Sep 4 17:17:38.567607 ignition[907]: Ignition 2.18.0
Sep 4 17:17:38.567614 ignition[907]: Stage: fetch
Sep 4 17:17:38.567765 ignition[907]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:38.567774 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:17:38.567881 ignition[907]: parsed url from cmdline: ""
Sep 4 17:17:38.567884 ignition[907]: no config URL provided
Sep 4 17:17:38.567889 ignition[907]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:17:38.567900 ignition[907]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:17:38.567920 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Sep 4 17:17:38.662346 ignition[907]: GET result: OK
Sep 4 17:17:38.662447 ignition[907]: config has been read from IMDS userdata
Sep 4 17:17:38.662490 ignition[907]: parsing config with SHA512: 1e559908b7b0132618850ea957c35b7b0ffae2de4c7619573371a9aa5a059ff36ce008b3b73bc21a59817d5b62688e5c3ba4efbec847fb2b85ec48380830c103
Sep 4 17:17:38.666322 unknown[907]: fetched base config from "system"
Sep 4 17:17:38.666687 ignition[907]: fetch: fetch complete
Sep 4 17:17:38.666329 unknown[907]: fetched base config from "system"
Sep 4 17:17:38.666691 ignition[907]: fetch: fetch passed
Sep 4 17:17:38.666335 unknown[907]: fetched user config from "azure"
Sep 4 17:17:38.666727 ignition[907]: Ignition finished successfully
Sep 4 17:17:38.671983 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:17:38.692304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:17:38.713997 ignition[914]: Ignition 2.18.0
Sep 4 17:17:38.714004 ignition[914]: Stage: kargs
Sep 4 17:17:38.724007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:17:38.714376 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:38.714391 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:17:38.720306 ignition[914]: kargs: kargs passed
Sep 4 17:17:38.720355 ignition[914]: Ignition finished successfully
Sep 4 17:17:38.752345 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:17:38.769599 ignition[921]: Ignition 2.18.0
Sep 4 17:17:38.769611 ignition[921]: Stage: disks
Sep 4 17:17:38.775289 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:17:38.769761 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:38.782722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:17:38.769770 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:17:38.793516 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:17:38.770714 ignition[921]: disks: disks passed
Sep 4 17:17:38.804842 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:17:38.770757 ignition[921]: Ignition finished successfully
Sep 4 17:17:38.816101 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:17:38.827363 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:17:38.853328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:17:38.888107 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Sep 4 17:17:38.892428 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:17:38.915241 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:17:38.970099 kernel: EXT4-fs (sda9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:17:38.971226 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:17:38.975932 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:17:39.004194 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:17:39.011187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:17:39.022229 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 4 17:17:39.045888 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941)
Sep 4 17:17:39.036857 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:17:39.076703 kernel: BTRFS info (device sda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:17:39.076728 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:39.036898 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:17:39.094164 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:17:39.060177 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:17:39.089303 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:17:39.115101 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:17:39.116419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:17:39.231739 coreos-metadata[943]: Sep 04 17:17:39.231 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Sep 4 17:17:39.241622 coreos-metadata[943]: Sep 04 17:17:39.241 INFO Fetch successful
Sep 4 17:17:39.246813 coreos-metadata[943]: Sep 04 17:17:39.246 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Sep 4 17:17:39.270111 coreos-metadata[943]: Sep 04 17:17:39.270 INFO Fetch successful
Sep 4 17:17:39.275303 coreos-metadata[943]: Sep 04 17:17:39.275 INFO wrote hostname ci-3975.2.1-a-bdc284204f to /sysroot/etc/hostname
Sep 4 17:17:39.284428 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 4 17:17:39.335828 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:17:39.349049 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:17:39.363583 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:17:39.373138 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:17:39.570166 systemd-networkd[896]: enP23660s1: Gained IPv6LL
Sep 4 17:17:39.630625 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:17:39.648354 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:17:39.661346 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:17:39.678916 kernel: BTRFS info (device sda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:17:39.674467 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:17:39.697368 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:17:39.710377 ignition[1061]: INFO : Ignition 2.18.0
Sep 4 17:17:39.710377 ignition[1061]: INFO : Stage: mount
Sep 4 17:17:39.718990 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:39.718990 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:17:39.718990 ignition[1061]: INFO : mount: mount passed
Sep 4 17:17:39.718990 ignition[1061]: INFO : Ignition finished successfully
Sep 4 17:17:39.716582 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:17:39.740200 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:17:39.771405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:17:39.792099 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1071)
Sep 4 17:17:39.804974 kernel: BTRFS info (device sda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:17:39.805001 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:17:39.809217 kernel: BTRFS info (device sda6): using free space tree
Sep 4 17:17:39.816101 kernel: BTRFS info (device sda6): auto enabling async discard
Sep 4 17:17:39.817935 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:17:39.842798 ignition[1089]: INFO : Ignition 2.18.0
Sep 4 17:17:39.842798 ignition[1089]: INFO : Stage: files
Sep 4 17:17:39.842798 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:17:39.842798 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Sep 4 17:17:39.842798 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:17:39.868806 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:17:39.868806 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:17:39.883566 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:17:39.890876 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:17:39.898147 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:17:39.897938 unknown[1089]: wrote ssh authorized keys file for user: core
Sep 4 17:17:39.910832 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:17:39.910832 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:17:40.041543 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:17:40.210196 systemd-networkd[896]: eth0: Gained IPv6LL
Sep 4 17:17:40.227306 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Sep 4 17:17:40.514174 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:17:40.756857 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:17:40.756857 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:17:40.817903 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:17:40.817903 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:17:40.817903 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:17:40.817903 ignition[1089]: INFO : files: files passed
Sep 4 17:17:40.817903 ignition[1089]: INFO : Ignition finished successfully
Sep 4 17:17:40.797594 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:17:40.844381 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:17:40.857248 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:17:40.885788 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:17:40.885878 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:17:40.905652 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:17:40.905652 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:17:40.922986 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:17:40.923114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:17:40.945172 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:17:40.959359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:17:41.002001 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:17:41.004196 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:17:41.014543 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:17:41.026531 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:17:41.037639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:17:41.040268 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:17:41.086832 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:17:41.105358 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:17:41.125825 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:17:41.125935 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:17:41.139487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:17:41.150195 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:17:41.162642 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:17:41.173610 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:17:41.173691 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:17:41.189876 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:17:41.201816 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:17:41.212442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:17:41.223127 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:17:41.235146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:17:41.247227 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:17:41.258724 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:17:41.270766 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:17:41.283827 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:17:41.296508 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:17:41.306255 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:17:41.306329 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:17:41.321094 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:17:41.327147 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:17:41.339311 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:17:41.339353 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:17:41.351542 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:17:41.351613 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:17:41.368515 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:17:41.368560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:17:41.375312 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:17:41.375351 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:17:41.387976 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Sep 4 17:17:41.450137 ignition[1142]: INFO : Ignition 2.18.0 Sep 4 17:17:41.450137 ignition[1142]: INFO : Stage: umount Sep 4 17:17:41.450137 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:41.450137 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:41.450137 ignition[1142]: INFO : umount: umount passed Sep 4 17:17:41.450137 ignition[1142]: INFO : Ignition finished successfully Sep 4 17:17:41.388016 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:17:41.419238 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:17:41.434725 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:17:41.434805 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:17:41.445202 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:17:41.455886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:17:41.455945 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:17:41.465700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:17:41.465745 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:17:41.488913 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:17:41.489022 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:17:41.497511 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:17:41.497565 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:17:41.510569 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:17:41.510625 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:17:41.521828 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:17:41.521873 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:17:41.536663 systemd[1]: Stopped target network.target - Network. Sep 4 17:17:41.553178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:17:41.553252 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:17:41.564025 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:17:41.573921 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:17:41.585104 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:17:41.597906 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:17:41.608104 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:17:41.619228 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:17:41.619281 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:17:41.635215 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:17:41.635267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:17:41.645511 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:17:41.645590 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:17:41.651112 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:17:41.651154 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:17:41.666839 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
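Every Ignition stage in this capture reports through the journal with the same pattern: a "Stage: <name>" line when it begins and an "Ignition finished successfully" line when it completes (files and umount above; fetch-offline, fetch, kargs, disks and mount in the replayed portion further down). A small sketch for pulling those stage results out of a capture like this one, assuming one journal entry per line; the log file name is hypothetical.

```python
import re
import sys

# Matches journal entries such as:
#   "ignition[1089]: INFO : Stage: files"
#   "ignition[863]: Stage: fetch-offline"
#   "ignition[1089]: INFO : Ignition finished successfully"
STAGE_RE = re.compile(r"ignition\[\d+\]:\s+(?:\w+\s+:\s+)?Stage:\s+([\w-]+)")
DONE_RE = re.compile(r"ignition\[\d+\]:\s+(?:\w+\s+:\s+)?Ignition finished (\w+)")


def ignition_stages(lines):
    """Yield (stage, outcome) pairs found in a boot-log capture."""
    stage = None
    for line in lines:
        m = STAGE_RE.search(line)
        if m:
            stage = m.group(1)
            continue
        m = DONE_RE.search(line)
        if m and stage is not None:
            yield stage, m.group(1)
            stage = None


if __name__ == "__main__":
    # Hypothetical usage: python ignition_stages.py boot.log
    with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
        for stage, outcome in ignition_stages(fh):
            print(f"{stage}: {outcome}")
```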
Sep 4 17:17:41.676669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:17:41.687127 systemd-networkd[896]: eth0: DHCPv6 lease lost Sep 4 17:17:41.688179 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:17:41.692370 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:17:41.692485 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:17:41.709991 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:17:41.710132 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:17:41.722561 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:17:41.892890 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: Data path switched from VF: enP23660s1 Sep 4 17:17:41.722625 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:17:41.749261 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:17:41.760550 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:17:41.760625 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:17:41.772105 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:17:41.772153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:17:41.782803 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:17:41.782844 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:17:41.793270 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:17:41.793311 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:17:41.805796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:17:41.850768 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:17:41.850964 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:17:41.862664 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:17:41.862709 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:17:41.874908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:17:41.874951 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:17:41.898626 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:17:41.898685 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:17:41.914873 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:17:41.914925 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:17:41.932655 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:17:41.932712 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:17:41.967354 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:17:41.984219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:17:41.984289 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:17:41.998192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:17:41.998241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 17:17:42.009265 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:17:42.009369 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:17:42.026937 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:17:42.027030 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:17:42.115428 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:17:42.115799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:17:42.127942 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:17:42.138292 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:17:42.138364 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:17:42.165307 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:17:42.406019 systemd[1]: Switching root. Sep 4 17:17:42.440690 systemd-journald[217]: Journal stopped Sep 4 17:17:35.288324 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:17:35.290366 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:17:35.290380 kernel: Fallback order for Node 0: 0 Sep 4 17:17:35.290387 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Sep 4 17:17:35.290399 kernel: Policy zone: Normal Sep 4 17:17:35.290406 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:17:35.290413 kernel: software IO TLB: area num 2. Sep 4 17:17:35.290420 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Sep 4 17:17:35.290428 kernel: Memory: 3986328K/4194160K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 207832K reserved, 0K cma-reserved) Sep 4 17:17:35.290435 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:17:35.290441 kernel: trace event string verifier disabled Sep 4 17:17:35.290448 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:17:35.290455 kernel: rcu: RCU event tracing is enabled. Sep 4 17:17:35.290462 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:17:35.290469 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:17:35.290476 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:17:35.290484 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:17:35.290491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:17:35.290498 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:17:35.290504 kernel: GICv3: 960 SPIs implemented Sep 4 17:17:35.290511 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:17:35.290517 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:17:35.290524 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 4 17:17:35.290531 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Sep 4 17:17:35.290537 kernel: ITS: No ITS available, not enabling LPIs Sep 4 17:17:35.290544 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:17:35.290551 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:17:35.290560 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 4 17:17:35.290566 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 4 17:17:35.290573 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 4 17:17:35.290580 kernel: Console: colour dummy device 80x25 Sep 4 17:17:35.290587 kernel: printk: console [tty1] enabled Sep 4 17:17:35.290594 kernel: ACPI: Core revision 20230628 Sep 4 17:17:35.290601 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 4 17:17:35.290608 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:17:35.290615 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Sep 4 17:17:35.290622 kernel: SELinux: Initializing. Sep 4 17:17:35.290630 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:17:35.290637 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:17:35.290644 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:17:35.290651 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:17:35.290658 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Sep 4 17:17:35.290665 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Sep 4 17:17:35.290672 kernel: Hyper-V: enabling crash_kexec_post_notifiers Sep 4 17:17:35.290685 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:17:35.290693 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:17:35.290700 kernel: Remapping and enabling EFI services. Sep 4 17:17:35.290707 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:17:35.290716 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:17:35.290723 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Sep 4 17:17:35.290730 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 4 17:17:35.290738 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 4 17:17:35.290745 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:17:35.290752 kernel: SMP: Total of 2 processors activated. 
Sep 4 17:17:35.290761 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:17:35.290768 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Sep 4 17:17:35.290775 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 4 17:17:35.290783 kernel: CPU features: detected: CRC32 instructions Sep 4 17:17:35.290790 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 4 17:17:35.290797 kernel: CPU features: detected: LSE atomic instructions Sep 4 17:17:35.290804 kernel: CPU features: detected: Privileged Access Never Sep 4 17:17:35.290812 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:17:35.290819 kernel: alternatives: applying system-wide alternatives Sep 4 17:17:35.290827 kernel: devtmpfs: initialized Sep 4 17:17:35.290834 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:17:35.290842 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:17:35.290849 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:17:35.290856 kernel: SMBIOS 3.1.0 present. Sep 4 17:17:35.290863 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Sep 4 17:17:35.290871 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:17:35.290878 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:17:35.290887 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:17:35.290894 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:17:35.290902 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:17:35.290909 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Sep 4 17:17:35.290916 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:17:35.290923 kernel: cpuidle: using governor menu Sep 4 17:17:35.290931 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 4 17:17:35.290938 kernel: ASID allocator initialised with 32768 entries Sep 4 17:17:35.290945 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:17:35.290954 kernel: Serial: AMBA PL011 UART driver Sep 4 17:17:35.290962 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 4 17:17:35.290969 kernel: Modules: 0 pages in range for non-PLT usage Sep 4 17:17:35.290976 kernel: Modules: 509120 pages in range for PLT usage Sep 4 17:17:35.290983 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:17:35.290990 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:17:35.290998 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:17:35.291005 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:17:35.291012 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:17:35.291021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:17:35.291028 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 17:17:35.291035 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:17:35.291042 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:17:35.291050 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:17:35.291057 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:17:35.291064 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:17:35.291071 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:17:35.291079 kernel: ACPI: Interpreter enabled Sep 4 17:17:35.291086 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:17:35.291095 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Sep 4 17:17:35.291102 kernel: printk: console [ttyAMA0] enabled Sep 4 17:17:35.291109 kernel: printk: bootconsole [pl11] disabled Sep 4 17:17:35.291116 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Sep 4 17:17:35.291124 kernel: iommu: Default domain type: Translated Sep 4 17:17:35.291131 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:17:35.291138 kernel: efivars: Registered efivars operations Sep 4 17:17:35.291145 kernel: vgaarb: loaded Sep 4 17:17:35.291152 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:17:35.291161 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:17:35.291168 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:17:35.291175 kernel: pnp: PnP ACPI init Sep 4 17:17:35.291183 kernel: pnp: PnP ACPI: found 0 devices Sep 4 17:17:35.291190 kernel: NET: Registered PF_INET protocol family Sep 4 17:17:35.291197 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:17:35.291205 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:17:35.291212 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:17:35.291219 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:17:35.291228 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:17:35.291235 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:17:35.291242 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:17:35.291250 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:17:35.291257 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family Sep 4 17:17:35.291264 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:17:35.291271 kernel: kvm [1]: HYP mode not available Sep 4 17:17:35.291278 kernel: Initialise system trusted keyrings Sep 4 17:17:35.291287 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:17:35.291294 kernel: Key type asymmetric registered Sep 4 17:17:35.291301 kernel: Asymmetric key parser 'x509' registered Sep 4 17:17:35.291309 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:17:35.291316 kernel: io scheduler mq-deadline registered Sep 4 17:17:35.291323 kernel: io scheduler kyber registered Sep 4 17:17:35.291339 kernel: io scheduler bfq registered Sep 4 17:17:35.291349 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:17:35.291356 kernel: thunder_xcv, ver 1.0 Sep 4 17:17:35.291363 kernel: thunder_bgx, ver 1.0 Sep 4 17:17:35.291372 kernel: nicpf, ver 1.0 Sep 4 17:17:35.291379 kernel: nicvf, ver 1.0 Sep 4 17:17:35.291522 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:17:35.291591 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:17:34 UTC (1725470254) Sep 4 17:17:35.291601 kernel: efifb: probing for efifb Sep 4 17:17:35.291609 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Sep 4 17:17:35.291616 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Sep 4 17:17:35.291626 kernel: efifb: scrolling: redraw Sep 4 17:17:35.291633 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 4 17:17:35.291640 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:17:35.291647 kernel: fb0: EFI VGA frame buffer device Sep 4 17:17:35.291655 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Sep 4 17:17:35.291662 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:17:35.291669 kernel: No ACPI PMU IRQ for CPU0 Sep 4 17:17:35.291676 kernel: No ACPI PMU IRQ for CPU1 Sep 4 17:17:35.291684 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Sep 4 17:17:35.291692 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:17:35.291699 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:17:35.291706 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:17:35.291714 kernel: Segment Routing with IPv6 Sep 4 17:17:35.291721 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:17:35.291728 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:17:35.291735 kernel: Key type dns_resolver registered Sep 4 17:17:35.291742 kernel: registered taskstats version 1 Sep 4 17:17:35.291749 kernel: Loading compiled-in X.509 certificates Sep 4 17:17:35.291756 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7' Sep 4 17:17:35.291765 kernel: Key type .fscrypt registered Sep 4 17:17:35.291772 kernel: Key type fscrypt-provisioning registered Sep 4 17:17:35.291779 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 4 17:17:35.291787 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:17:35.291794 kernel: ima: No architecture policies found Sep 4 17:17:35.291801 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:17:35.291808 kernel: clk: Disabling unused clocks Sep 4 17:17:35.291815 kernel: Freeing unused kernel memory: 39040K Sep 4 17:17:35.291824 kernel: Run /init as init process Sep 4 17:17:35.291831 kernel: with arguments: Sep 4 17:17:35.291839 kernel: /init Sep 4 17:17:35.291846 kernel: with environment: Sep 4 17:17:35.291853 kernel: HOME=/ Sep 4 17:17:35.291860 kernel: TERM=linux Sep 4 17:17:35.291867 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:17:35.291876 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:17:35.291887 systemd[1]: Detected virtualization microsoft. Sep 4 17:17:35.291895 systemd[1]: Detected architecture arm64. Sep 4 17:17:35.291903 systemd[1]: Running in initrd. Sep 4 17:17:35.291910 systemd[1]: No hostname configured, using default hostname. Sep 4 17:17:35.291918 systemd[1]: Hostname set to . Sep 4 17:17:35.291926 systemd[1]: Initializing machine ID from random generator. Sep 4 17:17:35.291934 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:17:35.291942 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:17:35.291951 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:17:35.291959 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:17:35.291967 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:17:35.291975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:17:35.291983 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:17:35.291992 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:17:35.292000 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:17:35.292010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:17:35.292018 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:17:35.292025 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:17:35.292033 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:17:35.292041 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:17:35.292049 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:17:35.292057 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:17:35.292064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:17:35.292072 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:17:35.292082 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:17:35.292089 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Sep 4 17:17:35.292097 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:17:35.292106 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:17:35.292113 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:17:35.292121 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:17:35.292129 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:17:35.292137 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:17:35.292146 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:17:35.292154 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:17:35.292162 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:17:35.292186 systemd-journald[217]: Collecting audit messages is disabled. Sep 4 17:17:35.292207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:17:35.292216 systemd-journald[217]: Journal started Sep 4 17:17:35.292234 systemd-journald[217]: Runtime Journal (/run/log/journal/1ccd926ca29a44ba8b6e71aa34bd96ac) is 8.0M, max 78.6M, 70.6M free. Sep 4 17:17:35.304184 systemd-modules-load[218]: Inserted module 'overlay' Sep 4 17:17:35.320618 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:17:35.336976 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:17:35.356430 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:17:35.356453 kernel: Bridge firewalling registered Sep 4 17:17:35.347630 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:17:35.347991 systemd-modules-load[218]: Inserted module 'br_netfilter' Sep 4 17:17:35.361326 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:17:35.371856 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:17:35.378351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:17:35.411695 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:17:35.420724 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:17:35.440505 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:17:35.459312 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:17:35.470361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:17:35.494520 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:17:35.502350 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:17:35.526713 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:17:35.535499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:17:35.552443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Sep 4 17:17:35.574399 dracut-cmdline[250]: dracut-dracut-053 Sep 4 17:17:35.574399 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc Sep 4 17:17:35.566495 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:17:35.585129 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:17:35.647989 systemd-resolved[264]: Positive Trust Anchors: Sep 4 17:17:35.648007 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:17:35.648039 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:17:35.651718 systemd-resolved[264]: Defaulting to hostname 'linux'. Sep 4 17:17:35.652568 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:17:35.661018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:17:35.740349 kernel: SCSI subsystem initialized Sep 4 17:17:35.747352 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:17:35.758354 kernel: iscsi: registered transport (tcp) Sep 4 17:17:35.775943 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:17:35.775971 kernel: QLogic iSCSI HBA Driver Sep 4 17:17:35.815233 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:17:35.832632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:17:35.859742 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:17:35.859789 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:17:35.865956 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:17:35.914351 kernel: raid6: neonx8 gen() 15780 MB/s Sep 4 17:17:35.934342 kernel: raid6: neonx4 gen() 15681 MB/s Sep 4 17:17:35.954341 kernel: raid6: neonx2 gen() 13253 MB/s Sep 4 17:17:35.975341 kernel: raid6: neonx1 gen() 10508 MB/s Sep 4 17:17:35.995353 kernel: raid6: int64x8 gen() 6969 MB/s Sep 4 17:17:36.015344 kernel: raid6: int64x4 gen() 7352 MB/s Sep 4 17:17:36.036345 kernel: raid6: int64x2 gen() 6130 MB/s Sep 4 17:17:36.059360 kernel: raid6: int64x1 gen() 5061 MB/s Sep 4 17:17:36.059382 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s Sep 4 17:17:36.083437 kernel: raid6: .... 
xor() 11940 MB/s, rmw enabled Sep 4 17:17:36.083454 kernel: raid6: using neon recovery algorithm Sep 4 17:17:36.092343 kernel: xor: measuring software checksum speed Sep 4 17:17:36.096341 kernel: 8regs : 19854 MB/sec Sep 4 17:17:36.102949 kernel: 32regs : 19654 MB/sec Sep 4 17:17:36.102970 kernel: arm64_neon : 27197 MB/sec Sep 4 17:17:36.106864 kernel: xor: using function: arm64_neon (27197 MB/sec) Sep 4 17:17:36.157347 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:17:36.166784 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:17:36.182459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:17:36.204017 systemd-udevd[437]: Using default interface naming scheme 'v255'. Sep 4 17:17:36.209320 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:17:36.232476 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:17:36.249349 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation Sep 4 17:17:36.275939 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:17:36.291529 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:17:36.330979 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:17:36.350768 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:17:36.376563 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:17:36.389584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:17:36.403417 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:17:36.416407 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:17:36.433445 kernel: hv_vmbus: Vmbus version:5.3 Sep 4 17:17:36.447861 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:17:36.471478 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 4 17:17:36.471500 kernel: hv_vmbus: registering driver hid_hyperv Sep 4 17:17:36.471513 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 4 17:17:36.464628 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:17:36.491768 kernel: hv_vmbus: registering driver hv_netvsc Sep 4 17:17:36.491813 kernel: hv_vmbus: registering driver hyperv_keyboard Sep 4 17:17:36.502552 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Sep 4 17:17:36.493129 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:17:36.515311 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Sep 4 17:17:36.503169 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:17:36.544912 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Sep 4 17:17:36.545070 kernel: hv_vmbus: registering driver hv_storvsc Sep 4 17:17:36.541010 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 17:17:36.579108 kernel: PTP clock support registered Sep 4 17:17:36.579128 kernel: hv_utils: Registering HyperV Utility Driver Sep 4 17:17:36.579139 kernel: hv_vmbus: registering driver hv_utils Sep 4 17:17:36.579148 kernel: hv_utils: Heartbeat IC version 3.0 Sep 4 17:17:36.579164 kernel: hv_utils: Shutdown IC version 3.2 Sep 4 17:17:36.554561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:17:36.213174 kernel: hv_utils: TimeSync IC version 4.0 Sep 4 17:17:36.225040 kernel: scsi host0: storvsc_host_t Sep 4 17:17:36.225185 kernel: scsi host1: storvsc_host_t Sep 4 17:17:36.225277 systemd-journald[217]: Time jumped backwards, rotating. Sep 4 17:17:36.225326 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Sep 4 17:17:36.554788 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:17:36.251945 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Sep 4 17:17:36.252012 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: VF slot 1 added Sep 4 17:17:36.199207 systemd-resolved[264]: Clock change detected. Flushing caches. Sep 4 17:17:36.279261 kernel: hv_vmbus: registering driver hv_pci Sep 4 17:17:36.279283 kernel: hv_pci ee16bb9d-5c6c-4a3d-929c-232d3cf69a78: PCI VMBus probing: Using version 0x10004 Sep 4 17:17:36.203062 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:17:36.238588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:17:36.261599 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:17:36.339241 kernel: hv_pci ee16bb9d-5c6c-4a3d-929c-232d3cf69a78: PCI host bridge to bus 5c6c:00 Sep 4 17:17:36.339417 kernel: pci_bus 5c6c:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Sep 4 17:17:36.339553 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Sep 4 17:17:36.339689 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 4 17:17:36.339701 kernel: pci_bus 5c6c:00: No busn resource found for root bus, will use [bus 00-ff] Sep 4 17:17:36.302406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Sep 4 17:17:36.354128 kernel: pci 5c6c:00:02.0: [15b3:1018] type 00 class 0x020000 Sep 4 17:17:36.354445 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Sep 4 17:17:36.368135 kernel: pci 5c6c:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 4 17:17:36.383256 kernel: pci 5c6c:00:02.0: enabling Extended Tags Sep 4 17:17:36.383317 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Sep 4 17:17:36.387321 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Sep 4 17:17:36.387448 kernel: pci 5c6c:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5c6c:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Sep 4 17:17:36.402029 kernel: sd 0:0:0:0: [sda] Write Protect is off Sep 4 17:17:36.408630 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Sep 4 17:17:36.408785 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Sep 4 17:17:36.408869 kernel: pci_bus 5c6c:00: busn_res: [bus 00-ff] end is updated to 00 Sep 4 17:17:36.434398 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:17:36.434436 kernel: pci 5c6c:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Sep 4 17:17:36.434626 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Sep 4 17:17:36.427566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:17:36.486337 kernel: mlx5_core 5c6c:00:02.0: enabling device (0000 -> 0002) Sep 4 17:17:36.493603 kernel: mlx5_core 5c6c:00:02.0: firmware version: 16.30.1284 Sep 4 17:17:36.593875 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Sep 4 17:17:36.625911 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Sep 4 17:17:36.642457 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (481) Sep 4 17:17:36.642497 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (494) Sep 4 17:17:36.665256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:17:36.687342 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Sep 4 17:17:36.693959 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Sep 4 17:17:36.720306 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:17:36.744092 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:17:36.750091 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:17:36.757101 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:17:36.797110 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: VF registering: eth1 Sep 4 17:17:36.797284 kernel: mlx5_core 5c6c:00:02.0 eth1: joined to eth0 Sep 4 17:17:36.804016 kernel: mlx5_core 5c6c:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Sep 4 17:17:36.817101 kernel: mlx5_core 5c6c:00:02.0 enP23660s1: renamed from eth1 Sep 4 17:17:37.763942 disk-uuid[596]: The operation has completed successfully. Sep 4 17:17:37.769885 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Sep 4 17:17:37.843676 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:17:37.845846 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:17:37.873277 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Sep 4 17:17:37.886222 sh[712]: Success Sep 4 17:17:37.904498 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:17:37.971372 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:17:37.988189 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:17:37.997777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:17:38.029334 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20 Sep 4 17:17:38.029385 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:17:38.036650 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:17:38.041754 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:17:38.045922 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:17:38.118403 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:17:38.123650 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:17:38.143286 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:17:38.151222 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:17:38.186139 kernel: BTRFS info (device sda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:17:38.186185 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:17:38.190577 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:17:38.207109 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:17:38.219409 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:17:38.227349 kernel: BTRFS info (device sda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:17:38.233609 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:17:38.248347 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:17:38.283283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:17:38.304294 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:17:38.332522 systemd-networkd[896]: lo: Link UP Sep 4 17:17:38.332530 systemd-networkd[896]: lo: Gained carrier Sep 4 17:17:38.337462 systemd-networkd[896]: Enumeration completed Sep 4 17:17:38.337564 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:17:38.338515 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:17:38.338518 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:17:38.344390 systemd[1]: Reached target network.target - Network. Sep 4 17:17:38.430504 kernel: mlx5_core 5c6c:00:02.0 enP23660s1: Link up Sep 4 17:17:38.472418 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: Data path switched to VF: enP23660s1 Sep 4 17:17:38.472092 systemd-networkd[896]: enP23660s1: Link UP Sep 4 17:17:38.472180 systemd-networkd[896]: eth0: Link UP Sep 4 17:17:38.472304 systemd-networkd[896]: eth0: Gained carrier Sep 4 17:17:38.472312 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 4 17:17:38.496425 systemd-networkd[896]: enP23660s1: Gained carrier Sep 4 17:17:38.505134 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 17:17:38.519324 ignition[863]: Ignition 2.18.0 Sep 4 17:17:38.519335 ignition[863]: Stage: fetch-offline Sep 4 17:17:38.523444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:17:38.519371 ignition[863]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:38.519379 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:38.519462 ignition[863]: parsed url from cmdline: "" Sep 4 17:17:38.546352 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 4 17:17:38.519466 ignition[863]: no config URL provided Sep 4 17:17:38.519470 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:17:38.519477 ignition[863]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:17:38.519481 ignition[863]: failed to fetch config: resource requires networking Sep 4 17:17:38.519648 ignition[863]: Ignition finished successfully Sep 4 17:17:38.567607 ignition[907]: Ignition 2.18.0 Sep 4 17:17:38.567614 ignition[907]: Stage: fetch Sep 4 17:17:38.567765 ignition[907]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:38.567774 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:38.567881 ignition[907]: parsed url from cmdline: "" Sep 4 17:17:38.567884 ignition[907]: no config URL provided Sep 4 17:17:38.567889 ignition[907]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:17:38.567900 ignition[907]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:17:38.567920 ignition[907]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Sep 4 17:17:38.662346 ignition[907]: GET result: OK Sep 4 17:17:38.662447 ignition[907]: config has been read from IMDS userdata Sep 4 17:17:38.662490 ignition[907]: parsing config with SHA512: 1e559908b7b0132618850ea957c35b7b0ffae2de4c7619573371a9aa5a059ff36ce008b3b73bc21a59817d5b62688e5c3ba4efbec847fb2b85ec48380830c103 Sep 4 17:17:38.666322 unknown[907]: fetched base config from "system" Sep 4 17:17:38.666687 ignition[907]: fetch: fetch complete Sep 4 17:17:38.666329 unknown[907]: fetched base config from "system" Sep 4 17:17:38.666691 ignition[907]: fetch: fetch passed Sep 4 17:17:38.666335 unknown[907]: fetched user config from "azure" Sep 4 17:17:38.666727 ignition[907]: Ignition finished successfully Sep 4 17:17:38.671983 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:17:38.692304 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:17:38.713997 ignition[914]: Ignition 2.18.0 Sep 4 17:17:38.714004 ignition[914]: Stage: kargs Sep 4 17:17:38.724007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:17:38.714376 ignition[914]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:38.714391 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:38.720306 ignition[914]: kargs: kargs passed Sep 4 17:17:38.720355 ignition[914]: Ignition finished successfully Sep 4 17:17:38.752345 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:17:38.769599 ignition[921]: Ignition 2.18.0 Sep 4 17:17:38.769611 ignition[921]: Stage: disks Sep 4 17:17:38.775289 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Sep 4 17:17:38.769761 ignition[921]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:38.782722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:17:38.769770 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:38.793516 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:17:38.770714 ignition[921]: disks: disks passed Sep 4 17:17:38.804842 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:17:38.770757 ignition[921]: Ignition finished successfully Sep 4 17:17:38.816101 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:17:38.827363 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:17:38.853328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:17:38.888107 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Sep 4 17:17:38.892428 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:17:38.915241 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:17:38.970099 kernel: EXT4-fs (sda9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none. Sep 4 17:17:38.971226 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:17:38.975932 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:17:39.004194 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:17:39.011187 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:17:39.022229 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 4 17:17:39.045888 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941) Sep 4 17:17:39.036857 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:17:39.076703 kernel: BTRFS info (device sda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:17:39.076728 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:17:39.036898 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:17:39.094164 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:17:39.060177 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:17:39.089303 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:17:39.115101 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:17:39.116419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:17:39.231739 coreos-metadata[943]: Sep 04 17:17:39.231 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:17:39.241622 coreos-metadata[943]: Sep 04 17:17:39.241 INFO Fetch successful Sep 4 17:17:39.246813 coreos-metadata[943]: Sep 04 17:17:39.246 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:17:39.270111 coreos-metadata[943]: Sep 04 17:17:39.270 INFO Fetch successful Sep 4 17:17:39.275303 coreos-metadata[943]: Sep 04 17:17:39.275 INFO wrote hostname ci-3975.2.1-a-bdc284204f to /sysroot/etc/hostname Sep 4 17:17:39.284428 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 4 17:17:39.335828 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:17:39.349049 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:17:39.363583 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:17:39.373138 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:17:39.570166 systemd-networkd[896]: enP23660s1: Gained IPv6LL Sep 4 17:17:39.630625 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:17:39.648354 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:17:39.661346 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:17:39.678916 kernel: BTRFS info (device sda6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:17:39.674467 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:17:39.697368 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:17:39.710377 ignition[1061]: INFO : Ignition 2.18.0 Sep 4 17:17:39.710377 ignition[1061]: INFO : Stage: mount Sep 4 17:17:39.718990 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:39.718990 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:39.718990 ignition[1061]: INFO : mount: mount passed Sep 4 17:17:39.718990 ignition[1061]: INFO : Ignition finished successfully Sep 4 17:17:39.716582 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:17:39.740200 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:17:39.771405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:17:39.792099 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1071) Sep 4 17:17:39.804974 kernel: BTRFS info (device sda6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0 Sep 4 17:17:39.805001 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:17:39.809217 kernel: BTRFS info (device sda6): using free space tree Sep 4 17:17:39.816101 kernel: BTRFS info (device sda6): auto enabling async discard Sep 4 17:17:39.817935 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:17:39.842798 ignition[1089]: INFO : Ignition 2.18.0 Sep 4 17:17:39.842798 ignition[1089]: INFO : Stage: files Sep 4 17:17:39.842798 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:39.842798 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:39.842798 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:17:39.868806 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:17:39.868806 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:17:39.883566 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:17:39.890876 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:17:39.898147 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:17:39.897938 unknown[1089]: wrote ssh authorized keys file for user: core Sep 4 17:17:39.910832 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:17:39.910832 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:17:40.041543 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:17:40.210196 systemd-networkd[896]: eth0: Gained IPv6LL Sep 4 17:17:40.227306 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:17:40.238102 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Sep 4 17:17:40.514174 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:17:40.756857 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Sep 4 17:17:40.756857 ignition[1089]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:17:40.776603 ignition[1089]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:17:40.817903 ignition[1089]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:17:40.817903 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:17:40.817903 ignition[1089]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:17:40.817903 ignition[1089]: INFO : files: files passed Sep 4 17:17:40.817903 ignition[1089]: INFO : Ignition finished successfully Sep 4 17:17:40.797594 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:17:40.844381 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:17:40.857248 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:17:40.885788 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:17:40.885878 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:17:40.905652 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:17:40.905652 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:17:40.922986 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:17:40.923114 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:17:40.945172 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:17:40.959359 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:17:41.002001 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:17:41.004196 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 4 17:17:41.014543 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:17:41.026531 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:17:41.037639 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:17:41.040268 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:17:41.086832 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:17:41.105358 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:17:41.125825 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:17:41.125935 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:17:41.139487 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:17:41.150195 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:17:41.162642 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:17:41.173610 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:17:41.173691 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:17:41.189876 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:17:41.201816 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:17:41.212442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:17:41.223127 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:17:41.235146 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:17:41.247227 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:17:41.258724 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:17:41.270766 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:17:41.283827 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:17:41.296508 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:17:41.306255 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:17:41.306329 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:17:41.321094 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:17:41.327147 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:17:41.339311 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:17:41.339353 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:17:41.351542 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:17:41.351613 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:17:41.368515 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:17:41.368560 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:17:41.375312 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:17:41.375351 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:17:41.387976 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. 
Sep 4 17:17:41.450137 ignition[1142]: INFO : Ignition 2.18.0 Sep 4 17:17:41.450137 ignition[1142]: INFO : Stage: umount Sep 4 17:17:41.450137 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:17:41.450137 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Sep 4 17:17:41.450137 ignition[1142]: INFO : umount: umount passed Sep 4 17:17:41.450137 ignition[1142]: INFO : Ignition finished successfully Sep 4 17:17:41.388016 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 4 17:17:41.419238 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:17:41.434725 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:17:41.434805 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:17:41.445202 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:17:41.455886 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:17:41.455945 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:17:41.465700 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:17:41.465745 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:17:41.488913 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:17:41.489022 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:17:41.497511 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:17:41.497565 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:17:41.510569 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:17:41.510625 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:17:41.521828 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:17:41.521873 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:17:41.536663 systemd[1]: Stopped target network.target - Network. Sep 4 17:17:41.553178 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:17:41.553252 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:17:41.564025 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:17:41.573921 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:17:41.585104 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:17:41.597906 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:17:41.608104 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:17:41.619228 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:17:41.619281 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:17:41.635215 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:17:41.635267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:17:41.645511 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:17:41.645590 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:17:41.651112 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:17:41.651154 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:17:41.666839 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 4 17:17:41.676669 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:17:41.687127 systemd-networkd[896]: eth0: DHCPv6 lease lost Sep 4 17:17:41.688179 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:17:41.692370 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:17:41.692485 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:17:41.709991 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:17:41.710132 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:17:41.722561 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:17:41.892890 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: Data path switched from VF: enP23660s1 Sep 4 17:17:41.722625 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:17:41.749261 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 4 17:17:41.760550 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:17:41.760625 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:17:41.772105 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:17:41.772153 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:17:41.782803 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:17:41.782844 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:17:41.793270 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:17:41.793311 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:17:41.805796 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:17:41.850768 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:17:41.850964 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:17:41.862664 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:17:41.862709 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:17:41.874908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:17:41.874951 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:17:41.898626 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:17:41.898685 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:17:41.914873 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:17:41.914925 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:17:41.932655 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:17:41.932712 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:17:41.967354 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:17:41.984219 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:17:41.984289 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:17:41.998192 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:17:41.998241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 17:17:42.009265 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:17:42.009369 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:17:42.026937 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:17:42.027030 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:17:42.115428 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:17:42.115799 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:17:42.127942 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:17:42.138292 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:17:42.138364 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:17:42.165307 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:17:42.406019 systemd[1]: Switching root. Sep 4 17:17:42.440690 systemd-journald[217]: Journal stopped Sep 4 17:17:44.486864 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Sep 4 17:17:44.486887 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:17:44.486897 kernel: SELinux: policy capability open_perms=1 Sep 4 17:17:44.486906 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:17:44.486914 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:17:44.486921 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:17:44.486930 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:17:44.486938 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:17:44.486945 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:17:44.486953 kernel: audit: type=1403 audit(1725470262.761:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:17:44.486963 systemd[1]: Successfully loaded SELinux policy in 85.887ms. Sep 4 17:17:44.486972 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.201ms. Sep 4 17:17:44.486982 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:17:44.486991 systemd[1]: Detected virtualization microsoft. Sep 4 17:17:44.487002 systemd[1]: Detected architecture arm64. Sep 4 17:17:44.487012 systemd[1]: Detected first boot. Sep 4 17:17:44.487021 systemd[1]: Hostname set to <ci-3975.2.1-a-bdc284204f>. Sep 4 17:17:44.487030 systemd[1]: Initializing machine ID from random generator. Sep 4 17:17:44.487039 zram_generator::config[1182]: No configuration found. Sep 4 17:17:44.487048 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:17:44.487057 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 4 17:17:44.487067 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 4 17:17:44.487091 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 4 17:17:44.487102 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:17:44.487111 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:17:44.487120 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:17:44.487129 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:17:44.487138 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:17:44.487149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:17:44.487158 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:17:44.487167 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:17:44.487175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:17:44.487185 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:17:44.487193 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:17:44.487202 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:17:44.487211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:17:44.487223 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:17:44.487232 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 4 17:17:44.487241 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:17:44.487250 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 4 17:17:44.487261 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 4 17:17:44.487270 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 4 17:17:44.487279 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:17:44.487288 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:17:44.487299 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:17:44.487308 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:17:44.487317 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:17:44.487326 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:17:44.487335 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:17:44.487344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:17:44.487354 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:17:44.487365 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:17:44.487374 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:17:44.487384 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:17:44.487393 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:17:44.487402 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:17:44.487412 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:17:44.487423 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:17:44.487433 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:17:44.487442 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:17:44.487452 systemd[1]: Reached target machines.target - Containers. 
Sep 4 17:17:44.487461 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:17:44.487471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:17:44.487480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:17:44.487489 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:17:44.487500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:17:44.487510 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:17:44.487519 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:17:44.487528 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:17:44.487537 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:17:44.487547 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:17:44.487557 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 4 17:17:44.487566 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 4 17:17:44.487577 kernel: fuse: init (API version 7.39) Sep 4 17:17:44.487586 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 4 17:17:44.487595 systemd[1]: Stopped systemd-fsck-usr.service. Sep 4 17:17:44.487604 kernel: loop: module loaded Sep 4 17:17:44.487612 kernel: ACPI: bus type drm_connector registered Sep 4 17:17:44.487622 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:17:44.487631 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:17:44.487641 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:17:44.487662 systemd-journald[1277]: Collecting audit messages is disabled. Sep 4 17:17:44.487683 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:17:44.487693 systemd-journald[1277]: Journal started Sep 4 17:17:44.487714 systemd-journald[1277]: Runtime Journal (/run/log/journal/421f2015af604dbbabf310e2bcede6df) is 8.0M, max 78.6M, 70.6M free. Sep 4 17:17:43.644518 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:17:43.698359 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Sep 4 17:17:43.698716 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 4 17:17:43.699019 systemd[1]: systemd-journald.service: Consumed 3.040s CPU time. Sep 4 17:17:44.513040 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:17:44.522938 systemd[1]: verity-setup.service: Deactivated successfully. Sep 4 17:17:44.522982 systemd[1]: Stopped verity-setup.service. Sep 4 17:17:44.540096 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:17:44.539727 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:17:44.545350 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:17:44.551405 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:17:44.556589 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:17:44.562501 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Sep 4 17:17:44.568745 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:17:44.575111 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:17:44.581799 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:17:44.588702 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:17:44.588838 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:17:44.595434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:17:44.595566 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:17:44.601857 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:17:44.601970 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:17:44.608270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:17:44.608391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:17:44.615512 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:17:44.615649 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:17:44.621731 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:17:44.621850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:17:44.628188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:17:44.634587 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:17:44.641672 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:17:44.650340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:17:44.665212 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:17:44.678152 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:17:44.685042 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:17:44.691406 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 4 17:17:44.691441 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:17:44.697890 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:17:44.705699 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:17:44.712794 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:17:44.718254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:17:44.731238 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:17:44.740268 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:17:44.748936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:17:44.749804 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:17:44.757308 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 4 17:17:44.760249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:17:44.765144 systemd-journald[1277]: Time spent on flushing to /var/log/journal/421f2015af604dbbabf310e2bcede6df is 98.675ms for 894 entries. Sep 4 17:17:44.765144 systemd-journald[1277]: System Journal (/var/log/journal/421f2015af604dbbabf310e2bcede6df) is 11.8M, max 2.6G, 2.6G free. Sep 4 17:17:44.900590 systemd-journald[1277]: Received client request to flush runtime journal. Sep 4 17:17:44.900628 systemd-journald[1277]: /var/log/journal/421f2015af604dbbabf310e2bcede6df/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Sep 4 17:17:44.900647 systemd-journald[1277]: Rotating system journal. Sep 4 17:17:44.900669 kernel: loop0: detected capacity change from 0 to 56592 Sep 4 17:17:44.900682 kernel: block loop0: the capability attribute has been deprecated. Sep 4 17:17:44.782240 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:17:44.810843 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:17:44.826441 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:17:44.838504 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:17:44.845033 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:17:44.853961 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:17:44.865694 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:17:44.880221 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:17:44.893388 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:17:44.901803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:17:44.908617 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:17:44.918732 udevadm[1319]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:17:44.933476 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:17:44.945276 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:17:44.981736 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Sep 4 17:17:44.981754 systemd-tmpfiles[1332]: ACLs are not supported, ignoring. Sep 4 17:17:44.986256 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:17:44.994918 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:17:44.996156 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Sep 4 17:17:45.019104 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:17:45.056101 kernel: loop1: detected capacity change from 0 to 113672 Sep 4 17:17:45.158100 kernel: loop2: detected capacity change from 0 to 59688 Sep 4 17:17:45.265107 kernel: loop3: detected capacity change from 0 to 194096 Sep 4 17:17:45.336103 kernel: loop4: detected capacity change from 0 to 56592 Sep 4 17:17:45.344095 kernel: loop5: detected capacity change from 0 to 113672 Sep 4 17:17:45.352105 kernel: loop6: detected capacity change from 0 to 59688 Sep 4 17:17:45.360326 kernel: loop7: detected capacity change from 0 to 194096 Sep 4 17:17:45.364117 (sd-merge)[1341]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Sep 4 17:17:45.364877 (sd-merge)[1341]: Merged extensions into '/usr'. Sep 4 17:17:45.370122 systemd[1]: Reloading requested from client PID 1315 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:17:45.370229 systemd[1]: Reloading... Sep 4 17:17:45.466649 zram_generator::config[1363]: No configuration found. Sep 4 17:17:45.600951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:45.662689 systemd[1]: Reloading finished in 291 ms. Sep 4 17:17:45.691610 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:17:45.698604 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:17:45.713253 systemd[1]: Starting ensure-sysext.service... Sep 4 17:17:45.719045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Sep 4 17:17:45.728349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:17:45.750858 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:17:45.751131 systemd[1]: Reloading requested from client PID 1421 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:17:45.751152 systemd[1]: Reloading... Sep 4 17:17:45.751209 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:17:45.751921 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:17:45.752691 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Sep 4 17:17:45.752810 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Sep 4 17:17:45.757028 systemd-tmpfiles[1422]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:17:45.757161 systemd-tmpfiles[1422]: Skipping /boot Sep 4 17:17:45.764700 systemd-tmpfiles[1422]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:17:45.764801 systemd-tmpfiles[1422]: Skipping /boot Sep 4 17:17:45.781330 systemd-udevd[1423]: Using default interface naming scheme 'v255'. Sep 4 17:17:45.825103 zram_generator::config[1448]: No configuration found. Sep 4 17:17:45.944096 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1474) Sep 4 17:17:45.957227 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 4 17:17:45.992880 kernel: mousedev: PS/2 mouse device common for all mice Sep 4 17:17:46.048221 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 4 17:17:46.048352 systemd[1]: Reloading finished in 296 ms. Sep 4 17:17:46.071780 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:17:46.095756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Sep 4 17:17:46.125267 kernel: hv_vmbus: registering driver hyperv_fb Sep 4 17:17:46.125352 kernel: hv_vmbus: registering driver hv_balloon Sep 4 17:17:46.125381 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Sep 4 17:17:46.125399 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Sep 4 17:17:46.134288 kernel: hv_balloon: Memory hot add disabled on ARM64 Sep 4 17:17:46.142103 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Sep 4 17:17:46.146380 kernel: Console: switching to colour dummy device 80x25 Sep 4 17:17:46.149097 kernel: Console: switching to colour frame buffer device 128x48 Sep 4 17:17:46.165045 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:17:46.177195 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1482) Sep 4 17:17:46.181433 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:17:46.190395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:17:46.204842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:17:46.224953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:17:46.239167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:17:46.247839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:17:46.260421 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:17:46.279604 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:17:46.287235 augenrules[1600]: No rules Sep 4 17:17:46.293194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:17:46.305372 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:17:46.322753 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:17:46.331888 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:17:46.339521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:17:46.339752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:17:46.346702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:17:46.346915 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:17:46.355288 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:17:46.355528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:17:46.361911 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:17:46.389502 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 4 17:17:46.403130 systemd[1]: Finished ensure-sysext.service. Sep 4 17:17:46.423438 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Sep 4 17:17:46.430786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:17:46.443427 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:17:46.450139 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:17:46.458339 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:17:46.467320 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:17:46.472675 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:17:46.478753 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:17:46.488516 systemd[1]: Reached target time-set.target - System Time Set. Sep 4 17:17:46.502384 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:17:46.510342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:17:46.520407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:17:46.522065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:17:46.523302 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:17:46.531363 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:17:46.532031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:17:46.540889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:17:46.541021 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:17:46.551614 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:17:46.568297 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:17:46.568438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:17:46.578071 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:17:46.589671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:17:46.589747 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:17:46.599468 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:17:46.620263 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:17:46.647205 ldconfig[1310]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:17:46.651416 lvm[1638]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:17:46.662396 systemd-networkd[1599]: lo: Link UP Sep 4 17:17:46.662403 systemd-networkd[1599]: lo: Gained carrier Sep 4 17:17:46.667011 systemd-networkd[1599]: Enumeration completed Sep 4 17:17:46.667105 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 4 17:17:46.669195 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:17:46.669200 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:17:46.674462 systemd-resolved[1605]: Positive Trust Anchors: Sep 4 17:17:46.674479 systemd-resolved[1605]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:17:46.674510 systemd-resolved[1605]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Sep 4 17:17:46.676165 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:17:46.679805 systemd-resolved[1605]: Using system hostname 'ci-3975.2.1-a-bdc284204f'. Sep 4 17:17:46.683420 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:17:46.689681 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:17:46.698564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:17:46.708211 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:17:46.717246 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:17:46.719223 lvm[1645]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:17:46.732598 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:17:46.743494 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:17:46.751046 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:17:46.761093 kernel: mlx5_core 5c6c:00:02.0 enP23660s1: Link up Sep 4 17:17:46.786111 kernel: hv_netvsc 000d3afe-5aaa-000d-3afe-5aaa000d3afe eth0: Data path switched to VF: enP23660s1 Sep 4 17:17:46.787110 systemd-networkd[1599]: enP23660s1: Link UP Sep 4 17:17:46.787233 systemd-networkd[1599]: eth0: Link UP Sep 4 17:17:46.787237 systemd-networkd[1599]: eth0: Gained carrier Sep 4 17:17:46.787252 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:17:46.787598 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:17:46.793681 systemd[1]: Reached target network.target - Network. Sep 4 17:17:46.798482 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:17:46.804740 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:17:46.805372 systemd-networkd[1599]: enP23660s1: Gained carrier Sep 4 17:17:46.810669 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:17:46.817704 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:17:46.824614 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Sep 4 17:17:46.830364 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:17:46.837017 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:17:46.843805 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:17:46.843839 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:17:46.844127 systemd-networkd[1599]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 17:17:46.849282 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:17:46.860136 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:17:46.867556 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:17:46.876911 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:17:46.882969 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:17:46.888893 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:17:46.894142 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:17:46.899171 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:17:46.899283 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:17:46.908205 systemd[1]: Starting chronyd.service - NTP client/server... Sep 4 17:17:46.917198 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:17:46.928242 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:17:46.934967 (chronyd)[1652]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Sep 4 17:17:46.945143 chronyd[1656]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Sep 4 17:17:46.946546 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:17:46.953357 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:17:46.959370 chronyd[1656]: Timezone right/UTC failed leap second check, ignoring Sep 4 17:17:46.959570 chronyd[1656]: Loaded seccomp filter (level 2) Sep 4 17:17:46.964643 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:17:46.972121 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:17:46.974678 jq[1660]: false Sep 4 17:17:46.979474 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:17:46.986906 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:17:46.999215 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Sep 4 17:17:47.005734 extend-filesystems[1661]: Found loop4 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found loop5 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found loop6 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found loop7 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda1 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda2 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda3 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found usr Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda4 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda6 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda7 Sep 4 17:17:47.005734 extend-filesystems[1661]: Found sda9 Sep 4 17:17:47.005734 extend-filesystems[1661]: Checking size of /dev/sda9 Sep 4 17:17:47.208308 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1473) Sep 4 17:17:47.208341 coreos-metadata[1654]: Sep 04 17:17:47.177 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Sep 4 17:17:47.208341 coreos-metadata[1654]: Sep 04 17:17:47.191 INFO Fetch successful Sep 4 17:17:47.208341 coreos-metadata[1654]: Sep 04 17:17:47.191 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Sep 4 17:17:47.208341 coreos-metadata[1654]: Sep 04 17:17:47.198 INFO Fetch successful Sep 4 17:17:47.208341 coreos-metadata[1654]: Sep 04 17:17:47.202 INFO Fetching http://168.63.129.16/machine/fe176645-38de-462a-ba2b-fa7345d8eaca/caf0db08%2Dfc9f%2D4a93%2Dbc68%2D4304e44dad27.%5Fci%2D3975.2.1%2Da%2Dbdc284204f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Sep 4 17:17:47.208341 coreos-metadata[1654]: Sep 04 17:17:47.206 INFO Fetch successful Sep 4 17:17:47.208341 coreos-metadata[1654]: Sep 04 17:17:47.206 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Sep 4 17:17:47.014700 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:17:47.222960 extend-filesystems[1661]: Old size kept for /dev/sda9 Sep 4 17:17:47.222960 extend-filesystems[1661]: Found sr0 Sep 4 17:17:47.080031 dbus-daemon[1657]: [system] SELinux support is enabled Sep 4 17:17:47.269677 coreos-metadata[1654]: Sep 04 17:17:47.219 INFO Fetch successful Sep 4 17:17:47.027266 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:17:47.036532 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:17:47.041824 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:17:47.062138 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:17:47.270190 update_engine[1679]: I0904 17:17:47.130738 1679 main.cc:92] Flatcar Update Engine starting Sep 4 17:17:47.270190 update_engine[1679]: I0904 17:17:47.140000 1679 update_check_scheduler.cc:74] Next update check in 2m9s Sep 4 17:17:47.086217 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:17:47.270467 jq[1684]: true Sep 4 17:17:47.106027 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:17:47.125884 systemd[1]: Started chronyd.service - NTP client/server. Sep 4 17:17:47.150575 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
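The coreos-metadata fetches above hit the two Azure-side endpoints visible throughout this log: the wireserver at 168.63.129.16 and the instance metadata service at 169.254.169.254. A minimal sketch of the IMDS query, with the URL taken verbatim from the journal entry; sending the "Metadata: true" header is the documented IMDS requirement and is assumed here:

```python
# Sketch of the IMDS request coreos-metadata logged above. Only meaningful
# when run on an Azure VM, where 169.254.169.254 is the link-local IMDS.
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.read().decode())  # the VM size string for this instance
```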
Sep 4 17:17:47.150781 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:17:47.151063 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:17:47.270995 jq[1712]: true Sep 4 17:17:47.153912 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:17:47.169549 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:17:47.169743 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:17:47.211592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:17:47.211807 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:17:47.224994 systemd-logind[1676]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 4 17:17:47.238690 systemd-logind[1676]: New seat seat0. Sep 4 17:17:47.241275 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:17:47.251351 (ntainerd)[1713]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:17:47.299860 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:17:47.313632 dbus-daemon[1657]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:17:47.315494 tar[1701]: linux-arm64/helm Sep 4 17:17:47.328319 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:17:47.340221 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:17:47.340415 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:17:47.340542 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:17:47.351865 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:17:47.351981 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:17:47.370717 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:17:47.461440 bash[1751]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:17:47.465136 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:17:47.475644 locksmithd[1745]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:17:47.477285 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 4 17:17:47.549195 containerd[1713]: time="2024-09-04T17:17:47.548065040Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:17:47.611372 containerd[1713]: time="2024-09-04T17:17:47.609633320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:17:47.613737 containerd[1713]: time="2024-09-04T17:17:47.613706760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:47.615448 containerd[1713]: time="2024-09-04T17:17:47.615409880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:47.615817 containerd[1713]: time="2024-09-04T17:17:47.615799320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:47.616165 containerd[1713]: time="2024-09-04T17:17:47.616136560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:47.617095 containerd[1713]: time="2024-09-04T17:17:47.617054400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:17:47.617280 containerd[1713]: time="2024-09-04T17:17:47.617259200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:47.618195 containerd[1713]: time="2024-09-04T17:17:47.618170840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:47.618268 containerd[1713]: time="2024-09-04T17:17:47.618254320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:47.618407 containerd[1713]: time="2024-09-04T17:17:47.618387000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:47.618788 containerd[1713]: time="2024-09-04T17:17:47.618760240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:47.619696 containerd[1713]: time="2024-09-04T17:17:47.619669280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:17:47.619830 containerd[1713]: time="2024-09-04T17:17:47.619811960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:17:47.620177 containerd[1713]: time="2024-09-04T17:17:47.620153120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:17:47.620607 containerd[1713]: time="2024-09-04T17:17:47.620587000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:17:47.620737 containerd[1713]: time="2024-09-04T17:17:47.620717680Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:17:47.621130 containerd[1713]: time="2024-09-04T17:17:47.621110640Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:17:47.646946 containerd[1713]: time="2024-09-04T17:17:47.646906960Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:17:47.647240 containerd[1713]: time="2024-09-04T17:17:47.647069320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Sep 4 17:17:47.647321 containerd[1713]: time="2024-09-04T17:17:47.647306160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:17:47.647439 containerd[1713]: time="2024-09-04T17:17:47.647423400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:17:47.647528 containerd[1713]: time="2024-09-04T17:17:47.647514480Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:17:47.647645 containerd[1713]: time="2024-09-04T17:17:47.647630200Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:17:47.647945 containerd[1713]: time="2024-09-04T17:17:47.647927640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:17:47.648505 containerd[1713]: time="2024-09-04T17:17:47.648483000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:17:47.648612 containerd[1713]: time="2024-09-04T17:17:47.648596920Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:17:47.648672 containerd[1713]: time="2024-09-04T17:17:47.648660480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:17:47.648738 containerd[1713]: time="2024-09-04T17:17:47.648713240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:17:47.648816 containerd[1713]: time="2024-09-04T17:17:47.648787120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:17:47.648954 containerd[1713]: time="2024-09-04T17:17:47.648937280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:17:47.649051 containerd[1713]: time="2024-09-04T17:17:47.649035840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:17:47.649494 containerd[1713]: time="2024-09-04T17:17:47.649345120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:17:47.649494 containerd[1713]: time="2024-09-04T17:17:47.649369000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:17:47.649494 containerd[1713]: time="2024-09-04T17:17:47.649383840Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:17:47.649494 containerd[1713]: time="2024-09-04T17:17:47.649396640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:17:47.649494 containerd[1713]: time="2024-09-04T17:17:47.649424280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:17:47.650980 containerd[1713]: time="2024-09-04T17:17:47.649775040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:17:47.650980 containerd[1713]: time="2024-09-04T17:17:47.650033200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Sep 4 17:17:47.650980 containerd[1713]: time="2024-09-04T17:17:47.650060720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.650980 containerd[1713]: time="2024-09-04T17:17:47.650093600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:17:47.650980 containerd[1713]: time="2024-09-04T17:17:47.650119680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:17:47.652405 containerd[1713]: time="2024-09-04T17:17:47.652369280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.652658 containerd[1713]: time="2024-09-04T17:17:47.652628560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.653017 containerd[1713]: time="2024-09-04T17:17:47.652993640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.653185 containerd[1713]: time="2024-09-04T17:17:47.653127040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.653318 containerd[1713]: time="2024-09-04T17:17:47.653295280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.653675 containerd[1713]: time="2024-09-04T17:17:47.653652680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655266 containerd[1713]: time="2024-09-04T17:17:47.655218880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655266 containerd[1713]: time="2024-09-04T17:17:47.655259600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655365 containerd[1713]: time="2024-09-04T17:17:47.655281040Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655424240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655453560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655474760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655490960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655510080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655528040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655544320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:17:47.655871 containerd[1713]: time="2024-09-04T17:17:47.655559800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:17:47.656042 containerd[1713]: time="2024-09-04T17:17:47.655827920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:17:47.656042 containerd[1713]: time="2024-09-04T17:17:47.655889160Z" level=info msg="Connect containerd service" Sep 4 17:17:47.656042 containerd[1713]: time="2024-09-04T17:17:47.655923880Z" level=info msg="using legacy CRI server" Sep 4 17:17:47.656042 containerd[1713]: time="2024-09-04T17:17:47.655935120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:17:47.656042 containerd[1713]: time="2024-09-04T17:17:47.656023600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:17:47.659330 containerd[1713]: time="2024-09-04T17:17:47.659106680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:17:47.659330 
containerd[1713]: time="2024-09-04T17:17:47.659252840Z" level=info msg="Start subscribing containerd event" Sep 4 17:17:47.659330 containerd[1713]: time="2024-09-04T17:17:47.659311200Z" level=info msg="Start recovering state" Sep 4 17:17:47.659453 containerd[1713]: time="2024-09-04T17:17:47.659388280Z" level=info msg="Start event monitor" Sep 4 17:17:47.659453 containerd[1713]: time="2024-09-04T17:17:47.659399720Z" level=info msg="Start snapshots syncer" Sep 4 17:17:47.659453 containerd[1713]: time="2024-09-04T17:17:47.659409720Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:17:47.659453 containerd[1713]: time="2024-09-04T17:17:47.659417040Z" level=info msg="Start streaming server" Sep 4 17:17:47.659879 containerd[1713]: time="2024-09-04T17:17:47.659260920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:17:47.659879 containerd[1713]: time="2024-09-04T17:17:47.659586040Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:17:47.660103 containerd[1713]: time="2024-09-04T17:17:47.659961840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:17:47.660103 containerd[1713]: time="2024-09-04T17:17:47.659986840Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:17:47.667538 containerd[1713]: time="2024-09-04T17:17:47.661115440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:17:47.667538 containerd[1713]: time="2024-09-04T17:17:47.661166480Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:17:47.667538 containerd[1713]: time="2024-09-04T17:17:47.667506080Z" level=info msg="containerd successfully booted in 0.131189s" Sep 4 17:17:47.661320 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:17:47.796155 sshd_keygen[1678]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:17:47.816109 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:17:47.817805 tar[1701]: linux-arm64/LICENSE Sep 4 17:17:47.817805 tar[1701]: linux-arm64/README.md Sep 4 17:17:47.830355 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:17:47.835713 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:17:47.842859 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:17:47.843026 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:17:47.861329 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:17:47.871143 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:17:47.879025 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:17:47.885843 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 4 17:17:47.892115 systemd-networkd[1599]: enP23660s1: Gained IPv6LL Sep 4 17:17:47.892456 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:17:48.274255 systemd-networkd[1599]: eth0: Gained IPv6LL Sep 4 17:17:48.276793 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:17:48.284115 systemd[1]: Reached target network-online.target - Network is Online. 
Sep 4 17:17:48.297244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:48.304294 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:17:48.311150 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Sep 4 17:17:48.340385 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Sep 4 17:17:48.355138 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:17:48.936768 waagent[1793]: 2024-09-04T17:17:48.936050Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Sep 4 17:17:48.943872 waagent[1793]: 2024-09-04T17:17:48.943786Z INFO Daemon Daemon OS: flatcar 3975.2.1 Sep 4 17:17:48.952452 waagent[1793]: 2024-09-04T17:17:48.948587Z INFO Daemon Daemon Python: 3.11.9 Sep 4 17:17:48.952947 waagent[1793]: 2024-09-04T17:17:48.952879Z INFO Daemon Daemon Run daemon Sep 4 17:17:48.958244 waagent[1793]: 2024-09-04T17:17:48.957034Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.2.1' Sep 4 17:17:48.966712 waagent[1793]: 2024-09-04T17:17:48.966459Z INFO Daemon Daemon Using waagent for provisioning Sep 4 17:17:48.973198 waagent[1793]: 2024-09-04T17:17:48.972273Z INFO Daemon Daemon Activate resource disk Sep 4 17:17:48.977024 waagent[1793]: 2024-09-04T17:17:48.976963Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Sep 4 17:17:48.982295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:48.993690 waagent[1793]: 2024-09-04T17:17:48.990375Z INFO Daemon Daemon Found device: None Sep 4 17:17:48.994727 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:17:49.004366 waagent[1793]: 2024-09-04T17:17:48.998936Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Sep 4 17:17:49.004731 (kubelet)[1806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:49.006533 systemd[1]: Startup finished in 640ms (kernel) + 8.321s (initrd) + 6.329s (userspace) = 15.291s. Sep 4 17:17:49.015148 waagent[1793]: 2024-09-04T17:17:49.014985Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Sep 4 17:17:49.030262 waagent[1793]: 2024-09-04T17:17:49.030197Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:17:49.041469 waagent[1793]: 2024-09-04T17:17:49.041404Z INFO Daemon Daemon Running default provisioning handler Sep 4 17:17:49.061173 waagent[1793]: 2024-09-04T17:17:49.061056Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Sep 4 17:17:49.096262 waagent[1793]: 2024-09-04T17:17:49.096181Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Sep 4 17:17:49.100501 login[1779]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:17:49.106665 waagent[1793]: 2024-09-04T17:17:49.106386Z INFO Daemon Daemon cloud-init is enabled: False Sep 4 17:17:49.117239 waagent[1793]: 2024-09-04T17:17:49.117148Z INFO Daemon Daemon Copying ovf-env.xml Sep 4 17:17:49.121419 login[1780]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 4 17:17:49.123191 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:17:49.133394 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:17:49.138834 systemd-logind[1676]: New session 1 of user core. Sep 4 17:17:49.145008 systemd-logind[1676]: New session 2 of user core. Sep 4 17:17:49.153138 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:17:49.162484 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:17:49.169324 (systemd)[1818]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:17:49.175893 waagent[1793]: 2024-09-04T17:17:49.174779Z INFO Daemon Daemon Successfully mounted dvd Sep 4 17:17:49.202159 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Sep 4 17:17:49.206171 waagent[1793]: 2024-09-04T17:17:49.205416Z INFO Daemon Daemon Detect protocol endpoint Sep 4 17:17:49.210589 waagent[1793]: 2024-09-04T17:17:49.210519Z INFO Daemon Daemon Clean protocol and wireserver endpoint Sep 4 17:17:49.216321 waagent[1793]: 2024-09-04T17:17:49.216256Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Sep 4 17:17:49.222705 waagent[1793]: 2024-09-04T17:17:49.222649Z INFO Daemon Daemon Test for route to 168.63.129.16 Sep 4 17:17:49.227873 waagent[1793]: 2024-09-04T17:17:49.227819Z INFO Daemon Daemon Route to 168.63.129.16 exists Sep 4 17:17:49.232846 waagent[1793]: 2024-09-04T17:17:49.232782Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Sep 4 17:17:49.247257 waagent[1793]: 2024-09-04T17:17:49.246922Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Sep 4 17:17:49.254163 waagent[1793]: 2024-09-04T17:17:49.253501Z INFO Daemon Daemon Wire protocol version:2012-11-30 Sep 4 17:17:49.258729 waagent[1793]: 2024-09-04T17:17:49.258606Z INFO Daemon Daemon Server preferred version:2015-04-05 Sep 4 17:17:49.344971 systemd[1818]: Queued start job for default target default.target. Sep 4 17:17:49.353365 systemd[1818]: Created slice app.slice - User Application Slice. Sep 4 17:17:49.353559 systemd[1818]: Reached target paths.target - Paths. Sep 4 17:17:49.353575 systemd[1818]: Reached target timers.target - Timers. Sep 4 17:17:49.355901 systemd[1818]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:17:49.381978 systemd[1818]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:17:49.382918 systemd[1818]: Reached target sockets.target - Sockets. Sep 4 17:17:49.384423 systemd[1818]: Reached target basic.target - Basic System. Sep 4 17:17:49.384500 systemd[1818]: Reached target default.target - Main User Target. Sep 4 17:17:49.384529 systemd[1818]: Startup finished in 206ms. Sep 4 17:17:49.384637 systemd[1]: Started user@500.service - User Manager for UID 500. 
Sep 4 17:17:49.391417 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:17:49.393156 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:17:49.460961 waagent[1793]: 2024-09-04T17:17:49.460806Z INFO Daemon Daemon Initializing goal state during protocol detection Sep 4 17:17:49.467641 waagent[1793]: 2024-09-04T17:17:49.467567Z INFO Daemon Daemon Forcing an update of the goal state. Sep 4 17:17:49.476903 waagent[1793]: 2024-09-04T17:17:49.476841Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:17:49.496518 waagent[1793]: 2024-09-04T17:17:49.496465Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.154 Sep 4 17:17:49.502731 waagent[1793]: 2024-09-04T17:17:49.502679Z INFO Daemon Sep 4 17:17:49.505834 waagent[1793]: 2024-09-04T17:17:49.505786Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: cf924202-f135-4aa2-84da-fddbbfdb154d eTag: 13778214716590133223 source: Fabric] Sep 4 17:17:49.517499 waagent[1793]: 2024-09-04T17:17:49.517366Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Sep 4 17:17:49.524889 waagent[1793]: 2024-09-04T17:17:49.524537Z INFO Daemon Sep 4 17:17:49.527686 waagent[1793]: 2024-09-04T17:17:49.527634Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:17:49.539042 waagent[1793]: 2024-09-04T17:17:49.538996Z INFO Daemon Daemon Downloading artifacts profile blob Sep 4 17:17:49.642309 waagent[1793]: 2024-09-04T17:17:49.642217Z INFO Daemon Downloaded certificate {'thumbprint': '75031E95E4139642DEB6ADF231C7F069FE9688CF', 'hasPrivateKey': False} Sep 4 17:17:49.652605 waagent[1793]: 2024-09-04T17:17:49.652542Z INFO Daemon Downloaded certificate {'thumbprint': '736F159F376B473A12BBD5BB5EEAD2BDC2621E7C', 'hasPrivateKey': True} Sep 4 17:17:49.662705 waagent[1793]: 2024-09-04T17:17:49.662498Z INFO Daemon Fetch goal state completed Sep 4 17:17:49.674746 waagent[1793]: 2024-09-04T17:17:49.674653Z INFO Daemon Daemon Starting provisioning Sep 4 17:17:49.680026 waagent[1793]: 2024-09-04T17:17:49.679725Z INFO Daemon Daemon Handle ovf-env.xml. Sep 4 17:17:49.684829 waagent[1793]: 2024-09-04T17:17:49.684763Z INFO Daemon Daemon Set hostname [ci-3975.2.1-a-bdc284204f] Sep 4 17:17:49.695139 kubelet[1806]: E0904 17:17:49.695098 1806 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:49.698159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:49.698322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:17:49.701611 waagent[1793]: 2024-09-04T17:17:49.700477Z INFO Daemon Daemon Publish hostname [ci-3975.2.1-a-bdc284204f] Sep 4 17:17:49.708081 waagent[1793]: 2024-09-04T17:17:49.706907Z INFO Daemon Daemon Examine /proc/net/route for primary interface Sep 4 17:17:49.713534 waagent[1793]: 2024-09-04T17:17:49.713429Z INFO Daemon Daemon Primary interface is [eth0] Sep 4 17:17:49.731696 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:17:49.731705 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 4 17:17:49.731749 systemd-networkd[1599]: eth0: DHCP lease lost Sep 4 17:17:49.733119 waagent[1793]: 2024-09-04T17:17:49.733008Z INFO Daemon Daemon Create user account if not exists Sep 4 17:17:49.738739 waagent[1793]: 2024-09-04T17:17:49.738668Z INFO Daemon Daemon User core already exists, skip useradd Sep 4 17:17:49.744624 waagent[1793]: 2024-09-04T17:17:49.744557Z INFO Daemon Daemon Configure sudoer Sep 4 17:17:49.745176 systemd-networkd[1599]: eth0: DHCPv6 lease lost Sep 4 17:17:49.749437 waagent[1793]: 2024-09-04T17:17:49.749352Z INFO Daemon Daemon Configure sshd Sep 4 17:17:49.753992 waagent[1793]: 2024-09-04T17:17:49.753889Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Sep 4 17:17:49.766916 waagent[1793]: 2024-09-04T17:17:49.766840Z INFO Daemon Daemon Deploy ssh public key. Sep 4 17:17:49.785198 systemd-networkd[1599]: eth0: DHCPv4 address 10.200.20.21/24, gateway 10.200.20.1 acquired from 168.63.129.16 Sep 4 17:17:50.957369 waagent[1793]: 2024-09-04T17:17:50.957320Z INFO Daemon Daemon Provisioning complete Sep 4 17:17:50.975890 waagent[1793]: 2024-09-04T17:17:50.975738Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Sep 4 17:17:50.981880 waagent[1793]: 2024-09-04T17:17:50.981824Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Sep 4 17:17:50.991168 waagent[1793]: 2024-09-04T17:17:50.991116Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Sep 4 17:17:51.125603 waagent[1868]: 2024-09-04T17:17:51.125109Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Sep 4 17:17:51.125603 waagent[1868]: 2024-09-04T17:17:51.125253Z INFO ExtHandler ExtHandler OS: flatcar 3975.2.1 Sep 4 17:17:51.125603 waagent[1868]: 2024-09-04T17:17:51.125304Z INFO ExtHandler ExtHandler Python: 3.11.9 Sep 4 17:17:51.139519 waagent[1868]: 2024-09-04T17:17:51.139447Z INFO ExtHandler ExtHandler Distro: flatcar-3975.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Sep 4 17:17:51.139825 waagent[1868]: 2024-09-04T17:17:51.139788Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:17:51.139963 waagent[1868]: 2024-09-04T17:17:51.139932Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:17:51.147789 waagent[1868]: 2024-09-04T17:17:51.147724Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Sep 4 17:17:51.153734 waagent[1868]: 2024-09-04T17:17:51.153689Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.154 Sep 4 17:17:51.154373 waagent[1868]: 2024-09-04T17:17:51.154329Z INFO ExtHandler Sep 4 17:17:51.155099 waagent[1868]: 2024-09-04T17:17:51.154499Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e86ea14f-ad29-447e-bb76-1f0920ce6cf7 eTag: 13778214716590133223 source: Fabric] Sep 4 17:17:51.155099 waagent[1868]: 2024-09-04T17:17:51.154786Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Sep 4 17:17:51.155400 waagent[1868]: 2024-09-04T17:17:51.155348Z INFO ExtHandler Sep 4 17:17:51.155460 waagent[1868]: 2024-09-04T17:17:51.155432Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Sep 4 17:17:51.159300 waagent[1868]: 2024-09-04T17:17:51.159266Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Sep 4 17:17:51.243855 waagent[1868]: 2024-09-04T17:17:51.243700Z INFO ExtHandler Downloaded certificate {'thumbprint': '75031E95E4139642DEB6ADF231C7F069FE9688CF', 'hasPrivateKey': False} Sep 4 17:17:51.244295 waagent[1868]: 2024-09-04T17:17:51.244248Z INFO ExtHandler Downloaded certificate {'thumbprint': '736F159F376B473A12BBD5BB5EEAD2BDC2621E7C', 'hasPrivateKey': True} Sep 4 17:17:51.244767 waagent[1868]: 2024-09-04T17:17:51.244724Z INFO ExtHandler Fetch goal state completed Sep 4 17:17:51.262290 waagent[1868]: 2024-09-04T17:17:51.262231Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1868 Sep 4 17:17:51.262435 waagent[1868]: 2024-09-04T17:17:51.262401Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Sep 4 17:17:51.264025 waagent[1868]: 2024-09-04T17:17:51.263979Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.2.1', '', 'Flatcar Container Linux by Kinvolk'] Sep 4 17:17:51.264429 waagent[1868]: 2024-09-04T17:17:51.264390Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Sep 4 17:17:51.270695 waagent[1868]: 2024-09-04T17:17:51.270650Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Sep 4 17:17:51.270877 waagent[1868]: 2024-09-04T17:17:51.270838Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Sep 4 17:17:51.277720 waagent[1868]: 2024-09-04T17:17:51.277256Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Sep 4 17:17:51.283810 systemd[1]: Reloading requested from client PID 1883 ('systemctl') (unit waagent.service)... Sep 4 17:17:51.283823 systemd[1]: Reloading... Sep 4 17:17:51.361112 zram_generator::config[1917]: No configuration found. Sep 4 17:17:51.459347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:51.542278 systemd[1]: Reloading finished in 258 ms. Sep 4 17:17:51.567096 waagent[1868]: 2024-09-04T17:17:51.562293Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Sep 4 17:17:51.568929 systemd[1]: Reloading requested from client PID 1968 ('systemctl') (unit waagent.service)... Sep 4 17:17:51.569067 systemd[1]: Reloading... Sep 4 17:17:51.639154 zram_generator::config[1999]: No configuration found. Sep 4 17:17:51.747678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:51.826916 systemd[1]: Reloading finished in 257 ms. 
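The `Downloaded certificate {'thumbprint': ...}` entries above carry 40-hex-character values, i.e. SHA-1-sized digests of the certificate's DER encoding. A sketch of how such a thumbprint can be reproduced, assuming a local PEM file and the third-party `cryptography` package (illustration only, not waagent's code):

```python
# Sketch: compute a certificate thumbprint in the format waagent logs above.
# Assumes "cert.pem" exists locally; the 40-hex values in the log are
# SHA-1 digests of the certificate's DER encoding.
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

with open("cert.pem", "rb") as f:            # hypothetical input file
    cert = x509.load_pem_x509_certificate(f.read())

der = cert.public_bytes(Encoding.DER)
print(hashlib.sha1(der).hexdigest().upper())  # 40 hex chars, like the log
```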
Sep 4 17:17:51.855657 waagent[1868]: 2024-09-04T17:17:51.855558Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Sep 4 17:17:51.855782 waagent[1868]: 2024-09-04T17:17:51.855744Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Sep 4 17:17:51.983151 waagent[1868]: 2024-09-04T17:17:51.983043Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Sep 4 17:17:51.983724 waagent[1868]: 2024-09-04T17:17:51.983676Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Sep 4 17:17:51.984539 waagent[1868]: 2024-09-04T17:17:51.984454Z INFO ExtHandler ExtHandler Starting env monitor service. Sep 4 17:17:51.984918 waagent[1868]: 2024-09-04T17:17:51.984833Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Sep 4 17:17:51.985391 waagent[1868]: 2024-09-04T17:17:51.985298Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Sep 4 17:17:51.985584 waagent[1868]: 2024-09-04T17:17:51.985392Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Sep 4 17:17:51.985584 waagent[1868]: 2024-09-04T17:17:51.985458Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:17:51.986635 waagent[1868]: 2024-09-04T17:17:51.985771Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Sep 4 17:17:51.986635 waagent[1868]: 2024-09-04T17:17:51.985864Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:17:51.986635 waagent[1868]: 2024-09-04T17:17:51.985999Z INFO EnvHandler ExtHandler Configure routes Sep 4 17:17:51.986635 waagent[1868]: 2024-09-04T17:17:51.986053Z INFO EnvHandler ExtHandler Gateway:None Sep 4 17:17:51.986635 waagent[1868]: 2024-09-04T17:17:51.986135Z INFO EnvHandler ExtHandler Routes:None Sep 4 17:17:51.986931 waagent[1868]: 2024-09-04T17:17:51.986883Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Sep 4 17:17:51.987275 waagent[1868]: 2024-09-04T17:17:51.987233Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Sep 4 17:17:51.987527 waagent[1868]: 2024-09-04T17:17:51.987490Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Sep 4 17:17:51.987527 waagent[1868]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Sep 4 17:17:51.987527 waagent[1868]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Sep 4 17:17:51.987527 waagent[1868]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Sep 4 17:17:51.987527 waagent[1868]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:17:51.987527 waagent[1868]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:17:51.987527 waagent[1868]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Sep 4 17:17:51.987825 waagent[1868]: 2024-09-04T17:17:51.987779Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Sep 4 17:17:51.988012 waagent[1868]: 2024-09-04T17:17:51.987978Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Sep 4 17:17:51.990241 waagent[1868]: 2024-09-04T17:17:51.990179Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
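The `/proc/net/route` dump the agent prints above stores addresses as little-endian hex, so the host routes it contains are easier to read after decoding; the sketch below shows they point at the wireserver (168.63.129.16) and IMDS (169.254.169.254) via the default gateway:

```python
# Sketch: decode the little-endian hex fields from the /proc/net/route dump
# that waagent printed above into dotted-quad addresses.
import socket
import struct

def hex_to_ip(h: str) -> str:
    """Convert a /proc/net/route hex field (host byte order) to dotted quad."""
    return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

for dest, gw in [("00000000", "0114C80A"),   # default route
                 ("0014C80A", "00000000"),   # on-link 10.200.20.0/24
                 ("10813FA8", "0114C80A"),   # Azure wireserver host route
                 ("FEA9FEA9", "0114C80A")]:  # IMDS host route
    print(hex_to_ip(dest), "via", hex_to_ip(gw))
# -> 0.0.0.0 via 10.200.20.1, 10.200.20.0 via 0.0.0.0,
#    168.63.129.16 via 10.200.20.1, 169.254.169.254 via 10.200.20.1
```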
Sep 4 17:17:52.001992 waagent[1868]: 2024-09-04T17:17:52.001944Z INFO ExtHandler ExtHandler Sep 4 17:17:52.002237 waagent[1868]: 2024-09-04T17:17:52.002195Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 663e5949-316d-4444-8420-7c9f65ad75b7 correlation 7fa7f116-eba0-49a6-9f3f-6a779bb0d8e4 created: 2024-09-04T17:17:12.180016Z] Sep 4 17:17:52.003057 waagent[1868]: 2024-09-04T17:17:52.003008Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Sep 4 17:17:52.004224 waagent[1868]: 2024-09-04T17:17:52.004175Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Sep 4 17:17:52.012006 waagent[1868]: 2024-09-04T17:17:52.011927Z INFO MonitorHandler ExtHandler Network interfaces: Sep 4 17:17:52.012006 waagent[1868]: Executing ['ip', '-a', '-o', 'link']: Sep 4 17:17:52.012006 waagent[1868]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Sep 4 17:17:52.012006 waagent[1868]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fe:5a:aa brd ff:ff:ff:ff:ff:ff Sep 4 17:17:52.012006 waagent[1868]: 3: enP23660s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:fe:5a:aa brd ff:ff:ff:ff:ff:ff\ altname enP23660p0s2 Sep 4 17:17:52.012006 waagent[1868]: Executing ['ip', '-4', '-a', '-o', 'address']: Sep 4 17:17:52.012006 waagent[1868]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Sep 4 17:17:52.012006 waagent[1868]: 2: eth0 inet 10.200.20.21/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Sep 4 17:17:52.012006 waagent[1868]: Executing ['ip', '-6', '-a', '-o', 'address']: Sep 4 17:17:52.012006 waagent[1868]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Sep 4 17:17:52.012006 waagent[1868]: 2: eth0 inet6 fe80::20d:3aff:fefe:5aaa/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:17:52.012006 waagent[1868]: 3: enP23660s1 inet6 fe80::20d:3aff:fefe:5aaa/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Sep 4 17:17:52.055726 waagent[1868]: 2024-09-04T17:17:52.055590Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 73C5BE85-7B9D-4C2A-934F-2A78ECF16865;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Sep 4 17:17:52.058337 waagent[1868]: 2024-09-04T17:17:52.058268Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Sep 4 17:17:52.058337 waagent[1868]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:17:52.058337 waagent[1868]: pkts bytes target prot opt in out source destination Sep 4 17:17:52.058337 waagent[1868]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:17:52.058337 waagent[1868]: pkts bytes target prot opt in out source destination Sep 4 17:17:52.058337 waagent[1868]: Chain OUTPUT (policy ACCEPT 3 packets, 348 bytes) Sep 4 17:17:52.058337 waagent[1868]: pkts bytes target prot opt in out source destination Sep 4 17:17:52.058337 waagent[1868]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:17:52.058337 waagent[1868]: 1 52 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:17:52.058337 waagent[1868]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:17:52.062263 waagent[1868]: 2024-09-04T17:17:52.062201Z INFO EnvHandler ExtHandler Current Firewall rules: Sep 4 17:17:52.062263 waagent[1868]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:17:52.062263 waagent[1868]: pkts bytes target prot opt in out source destination Sep 4 17:17:52.062263 waagent[1868]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Sep 4 17:17:52.062263 waagent[1868]: pkts bytes target prot opt in out source destination Sep 4 17:17:52.062263 waagent[1868]: Chain OUTPUT (policy ACCEPT 5 packets, 452 bytes) Sep 4 17:17:52.062263 waagent[1868]: pkts bytes target prot opt in out source destination Sep 4 17:17:52.062263 waagent[1868]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Sep 4 17:17:52.062263 waagent[1868]: 5 646 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Sep 4 17:17:52.062263 waagent[1868]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Sep 4 17:17:52.062530 waagent[1868]: 2024-09-04T17:17:52.062489Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Sep 4 17:17:59.949063 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:17:59.957265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:00.044785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:00.058435 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:18:00.629243 kubelet[2092]: E0904 17:18:00.629186 2092 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:18:00.631939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:18:00.632064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:18:10.760023 chronyd[1656]: Selected source PHC0 Sep 4 17:18:10.882734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:18:10.893265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:11.010814 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
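The firewall listing above amounts to three OUTPUT rules for wireserver traffic: allow DNS (dpt:53), allow root-owned connections (the agent runs as UID 0), and drop NEW/INVALID connections from anyone else; other destinations fall through to the chain's ACCEPT policy. A pure-Python illustration of that evaluation order (not waagent's implementation):

```python
# Illustration only: evaluate a TCP packet against the three OUTPUT rules
# waagent listed above, in the order they appear in the chain.
WIRESERVER = "168.63.129.16"

def verdict(dst: str, dport: int, uid: int, ctstate: str) -> str:
    if dst != WIRESERVER:
        return "ACCEPT"                 # chain policy ACCEPT for other hosts
    if dport == 53:
        return "ACCEPT"                 # rule 1: DNS to the wireserver
    if uid == 0:
        return "ACCEPT"                 # rule 2: root-owned traffic (the agent)
    if ctstate in ("INVALID", "NEW"):
        return "DROP"                   # rule 3: block new conns from other users
    return "ACCEPT"                     # established traffic falls through

print(verdict(WIRESERVER, 80, 1000, "NEW"))  # DROP
print(verdict(WIRESERVER, 80, 0, "NEW"))     # ACCEPT
```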
Sep 4 17:18:11.015859 (kubelet)[2108]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:18:11.588699 kubelet[2108]: E0904 17:18:11.588627 2108 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:18:11.591351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:18:11.591613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:18:19.459135 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:18:19.467526 systemd[1]: Started sshd@0-10.200.20.21:22-10.200.16.10:55628.service - OpenSSH per-connection server daemon (10.200.16.10:55628). Sep 4 17:18:19.929442 sshd[2117]: Accepted publickey for core from 10.200.16.10 port 55628 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:18:19.930828 sshd[2117]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:19.935125 systemd-logind[1676]: New session 3 of user core. Sep 4 17:18:19.944285 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:18:20.332677 systemd[1]: Started sshd@1-10.200.20.21:22-10.200.16.10:55634.service - OpenSSH per-connection server daemon (10.200.16.10:55634). Sep 4 17:18:20.786848 sshd[2122]: Accepted publickey for core from 10.200.16.10 port 55634 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:18:20.792387 sshd[2122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:20.797336 systemd-logind[1676]: New session 4 of user core. Sep 4 17:18:20.808319 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:18:21.114239 sshd[2122]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:21.117732 systemd[1]: sshd@1-10.200.20.21:22-10.200.16.10:55634.service: Deactivated successfully. Sep 4 17:18:21.119225 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:18:21.119791 systemd-logind[1676]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:18:21.120961 systemd-logind[1676]: Removed session 4. Sep 4 17:18:21.201533 systemd[1]: Started sshd@2-10.200.20.21:22-10.200.16.10:55648.service - OpenSSH per-connection server daemon (10.200.16.10:55648). Sep 4 17:18:21.676939 sshd[2129]: Accepted publickey for core from 10.200.16.10 port 55648 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:18:21.678262 sshd[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:21.679188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:18:21.689317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:21.691789 systemd-logind[1676]: New session 5 of user core. Sep 4 17:18:21.695261 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:18:21.788684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:18:21.793043 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:18:21.829435 kubelet[2140]: E0904 17:18:21.829363 2140 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:18:21.831676 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:18:21.831799 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:18:22.016321 sshd[2129]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:22.019850 systemd[1]: sshd@2-10.200.20.21:22-10.200.16.10:55648.service: Deactivated successfully. Sep 4 17:18:22.021648 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:18:22.022550 systemd-logind[1676]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:18:22.023609 systemd-logind[1676]: Removed session 5. Sep 4 17:18:22.102019 systemd[1]: Started sshd@3-10.200.20.21:22-10.200.16.10:55650.service - OpenSSH per-connection server daemon (10.200.16.10:55650). Sep 4 17:18:22.577789 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 55650 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:18:22.579110 sshd[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:22.583803 systemd-logind[1676]: New session 6 of user core. Sep 4 17:18:22.589300 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:18:22.921939 sshd[2152]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:22.925591 systemd[1]: sshd@3-10.200.20.21:22-10.200.16.10:55650.service: Deactivated successfully. Sep 4 17:18:22.929206 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:18:22.929844 systemd-logind[1676]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:18:22.930725 systemd-logind[1676]: Removed session 6. Sep 4 17:18:23.005608 systemd[1]: Started sshd@4-10.200.20.21:22-10.200.16.10:55664.service - OpenSSH per-connection server daemon (10.200.16.10:55664). Sep 4 17:18:23.450666 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 55664 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:18:23.451956 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:23.456881 systemd-logind[1676]: New session 7 of user core. Sep 4 17:18:23.463268 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:18:23.753323 sudo[2162]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:18:23.753561 sudo[2162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:18:23.766259 sudo[2162]: pam_unix(sudo:session): session closed for user root Sep 4 17:18:23.838382 sshd[2159]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:23.842318 systemd-logind[1676]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:18:23.842938 systemd[1]: sshd@4-10.200.20.21:22-10.200.16.10:55664.service: Deactivated successfully. Sep 4 17:18:23.845625 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:18:23.846644 systemd-logind[1676]: Removed session 7. 
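The `Accepted publickey ... SHA256:...` entries above use OpenSSH's fingerprint format: the unpadded base64 of the SHA-256 digest of the raw public-key blob. A sketch that reproduces the format from the authorized_keys file this image updates earlier in the log:

```python
# Sketch: reproduce the "SHA256:..." fingerprint format sshd logs above
# (unpadded base64 of the SHA-256 digest of the raw public-key blob).
import base64
import hashlib

with open("/home/core/.ssh/authorized_keys") as f:  # path updated earlier in this log
    key_b64 = f.readline().split()[1]               # second field = base64 key blob

digest = hashlib.sha256(base64.b64decode(key_b64)).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```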
Sep 4 17:18:23.922823 systemd[1]: Started sshd@5-10.200.20.21:22-10.200.16.10:55670.service - OpenSSH per-connection server daemon (10.200.16.10:55670). Sep 4 17:18:24.404558 sshd[2167]: Accepted publickey for core from 10.200.16.10 port 55670 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:18:24.405997 sshd[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:24.410246 systemd-logind[1676]: New session 8 of user core. Sep 4 17:18:24.420224 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:18:24.673793 sudo[2171]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:18:24.674594 sudo[2171]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:18:24.678306 sudo[2171]: pam_unix(sudo:session): session closed for user root Sep 4 17:18:24.683053 sudo[2170]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:18:24.683310 sudo[2170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:18:24.694343 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:18:24.698366 auditctl[2174]: No rules Sep 4 17:18:24.698688 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:18:24.698903 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:18:24.701844 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:18:24.736064 augenrules[2192]: No rules Sep 4 17:18:24.736653 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:18:24.738137 sudo[2170]: pam_unix(sudo:session): session closed for user root Sep 4 17:18:24.824428 sshd[2167]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:24.827307 systemd-logind[1676]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:18:24.827485 systemd[1]: sshd@5-10.200.20.21:22-10.200.16.10:55670.service: Deactivated successfully. Sep 4 17:18:24.829113 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:18:24.831215 systemd-logind[1676]: Removed session 8. Sep 4 17:18:24.909566 systemd[1]: Started sshd@6-10.200.20.21:22-10.200.16.10:55674.service - OpenSSH per-connection server daemon (10.200.16.10:55674). Sep 4 17:18:25.388163 sshd[2200]: Accepted publickey for core from 10.200.16.10 port 55674 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:18:25.389441 sshd[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:18:25.394023 systemd-logind[1676]: New session 9 of user core. Sep 4 17:18:25.403370 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:18:25.657402 sudo[2203]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:18:25.657648 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 4 17:18:25.793338 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:18:25.793501 (dockerd)[2212]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:18:26.118647 dockerd[2212]: time="2024-09-04T17:18:26.118310311Z" level=info msg="Starting up" Sep 4 17:18:26.186055 dockerd[2212]: time="2024-09-04T17:18:26.186011652Z" level=info msg="Loading containers: start." 
Sep 4 17:18:26.674102 kernel: Initializing XFRM netlink socket Sep 4 17:18:26.737199 systemd-networkd[1599]: docker0: Link UP Sep 4 17:18:26.765455 dockerd[2212]: time="2024-09-04T17:18:26.765388314Z" level=info msg="Loading containers: done." Sep 4 17:18:26.839642 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck769488626-merged.mount: Deactivated successfully. Sep 4 17:18:26.849962 dockerd[2212]: time="2024-09-04T17:18:26.849913990Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:18:26.850242 dockerd[2212]: time="2024-09-04T17:18:26.850218151Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Sep 4 17:18:26.850380 dockerd[2212]: time="2024-09-04T17:18:26.850359231Z" level=info msg="Daemon has completed initialization" Sep 4 17:18:26.904373 dockerd[2212]: time="2024-09-04T17:18:26.904303576Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:18:26.908208 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:18:27.777828 containerd[1713]: time="2024-09-04T17:18:27.777749790Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\"" Sep 4 17:18:28.769911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount6239306.mount: Deactivated successfully. Sep 4 17:18:30.415791 containerd[1713]: time="2024-09-04T17:18:30.415735825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:30.418854 containerd[1713]: time="2024-09-04T17:18:30.418808351Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=29943740" Sep 4 17:18:30.423987 containerd[1713]: time="2024-09-04T17:18:30.423935961Z" level=info msg="ImageCreate event name:\"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:30.428852 containerd[1713]: time="2024-09-04T17:18:30.428786851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:30.429914 containerd[1713]: time="2024-09-04T17:18:30.429738253Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"29940540\" in 2.651948382s" Sep 4 17:18:30.429914 containerd[1713]: time="2024-09-04T17:18:30.429777573Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\"" Sep 4 17:18:30.450669 containerd[1713]: time="2024-09-04T17:18:30.450590133Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\"" Sep 4 17:18:31.869922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Sep 4 17:18:31.876441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
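The dockerd startup in the entries above ends with "API listen on /run/docker.sock", i.e. the Engine API (version 24.0.9, overlay2 graph driver) is reachable only over that Unix socket at this point. A minimal stdlib probe of that endpoint, included purely as an illustration and assuming the default socket path shown in the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """An HTTPConnection that connects over a Unix domain socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")              # Engine API version endpoint
    info = json.loads(conn.getresponse().read())
    print(info.get("Version"), info.get("ApiVersion"))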
Sep 4 17:18:31.993413 (kubelet)[2411]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:18:31.993840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:32.483488 kubelet[2411]: E0904 17:18:32.483381 2411 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:18:32.485801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:18:32.485944 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:18:32.575626 containerd[1713]: time="2024-09-04T17:18:32.574605372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:32.577879 containerd[1713]: time="2024-09-04T17:18:32.577844298Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=26881132" Sep 4 17:18:32.584098 containerd[1713]: time="2024-09-04T17:18:32.582936668Z" level=info msg="ImageCreate event name:\"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:32.591113 containerd[1713]: time="2024-09-04T17:18:32.591066804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:32.592141 containerd[1713]: time="2024-09-04T17:18:32.592107806Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"28368399\" in 2.141477873s" Sep 4 17:18:32.592246 containerd[1713]: time="2024-09-04T17:18:32.592230766Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\"" Sep 4 17:18:32.612687 containerd[1713]: time="2024-09-04T17:18:32.612653326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\"" Sep 4 17:18:32.720204 update_engine[1679]: I0904 17:18:32.720144 1679 update_attempter.cc:509] Updating boot flags... Sep 4 17:18:32.779162 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2437) Sep 4 17:18:32.860134 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2440) Sep 4 17:18:34.275974 kernel: hv_balloon: Max. 
dynamic memory size: 4096 MB Sep 4 17:18:35.069122 containerd[1713]: time="2024-09-04T17:18:35.068772134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:35.071641 containerd[1713]: time="2024-09-04T17:18:35.071433939Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=16154063" Sep 4 17:18:35.075443 containerd[1713]: time="2024-09-04T17:18:35.075398267Z" level=info msg="ImageCreate event name:\"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:35.080731 containerd[1713]: time="2024-09-04T17:18:35.080678079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:35.081806 containerd[1713]: time="2024-09-04T17:18:35.081686641Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"17641348\" in 2.468855315s" Sep 4 17:18:35.081806 containerd[1713]: time="2024-09-04T17:18:35.081719401Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\"" Sep 4 17:18:35.101126 containerd[1713]: time="2024-09-04T17:18:35.101084841Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\"" Sep 4 17:18:36.273228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274909719.mount: Deactivated successfully. 
Sep 4 17:18:36.592578 containerd[1713]: time="2024-09-04T17:18:36.592185655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:36.596306 containerd[1713]: time="2024-09-04T17:18:36.596267703Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=25646047" Sep 4 17:18:36.599546 containerd[1713]: time="2024-09-04T17:18:36.599496990Z" level=info msg="ImageCreate event name:\"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:36.603362 containerd[1713]: time="2024-09-04T17:18:36.603330238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:36.604374 containerd[1713]: time="2024-09-04T17:18:36.603881159Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"25645066\" in 1.502753358s" Sep 4 17:18:36.604374 containerd[1713]: time="2024-09-04T17:18:36.603914599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\"" Sep 4 17:18:36.623934 containerd[1713]: time="2024-09-04T17:18:36.623856041Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:18:37.307601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1429241202.mount: Deactivated successfully. 
Sep 4 17:18:39.087115 containerd[1713]: time="2024-09-04T17:18:39.086918057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.089424 containerd[1713]: time="2024-09-04T17:18:39.089206622Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Sep 4 17:18:39.092401 containerd[1713]: time="2024-09-04T17:18:39.092350708Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.097861 containerd[1713]: time="2024-09-04T17:18:39.097783960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.099106 containerd[1713]: time="2024-09-04T17:18:39.098963042Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.475036521s" Sep 4 17:18:39.099106 containerd[1713]: time="2024-09-04T17:18:39.098999682Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Sep 4 17:18:39.118647 containerd[1713]: time="2024-09-04T17:18:39.118594003Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:18:39.754118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8577399.mount: Deactivated successfully. 
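The containerd "Pulled image ... size ... in ..." entries above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns) carry enough data to estimate the effective pull throughput for each image. A rough parser for that pattern, illustrative only and assuming one journal entry per line; the boot.journal filename is hypothetical:

    import re

    # Matches containerd's "Pulled image ... size "N" in <duration>" messages,
    # tolerating the escaped quotes that appear in journal output.
    PULLED = re.compile(
        r'Pulled image \\?"([^"\\]+)\\?".*? size \\?"(\d+)\\?" in ([\d.]+)(ms|s)\b'
    )

    def pull_rates(journal_text):
        """Yield (image, MiB/s) for each completed containerd image pull."""
        for image, size, duration, unit in PULLED.findall(journal_text):
            seconds = float(duration) / (1000.0 if unit == "ms" else 1.0)
            yield image, int(size) / (1024 * 1024) / seconds

    for image, rate in pull_rates(open("boot.journal").read()):
        print(f"{image}: {rate:.1f} MiB/s")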
Sep 4 17:18:39.793198 containerd[1713]: time="2024-09-04T17:18:39.793145781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.796919 containerd[1713]: time="2024-09-04T17:18:39.796715948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Sep 4 17:18:39.805705 containerd[1713]: time="2024-09-04T17:18:39.805643767Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.814510 containerd[1713]: time="2024-09-04T17:18:39.814430706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.815734 containerd[1713]: time="2024-09-04T17:18:39.815242587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 696.600583ms" Sep 4 17:18:39.815734 containerd[1713]: time="2024-09-04T17:18:39.815281347Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:18:39.835811 containerd[1713]: time="2024-09-04T17:18:39.835765710Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Sep 4 17:18:40.586471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169622739.mount: Deactivated successfully. Sep 4 17:18:42.516246 containerd[1713]: time="2024-09-04T17:18:42.516199902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:42.523690 containerd[1713]: time="2024-09-04T17:18:42.523646434Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Sep 4 17:18:42.527729 containerd[1713]: time="2024-09-04T17:18:42.527678881Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:42.533353 containerd[1713]: time="2024-09-04T17:18:42.533285730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:42.535004 containerd[1713]: time="2024-09-04T17:18:42.534506692Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.698697501s" Sep 4 17:18:42.535004 containerd[1713]: time="2024-09-04T17:18:42.534542092Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Sep 4 17:18:42.569938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Sep 4 17:18:42.576333 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:42.680997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:42.686152 (kubelet)[2636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:18:43.181546 kubelet[2636]: E0904 17:18:43.181445 2636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:18:43.184711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:18:43.184852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:18:48.319436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:48.329334 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:48.354228 systemd[1]: Reloading requested from client PID 2701 ('systemctl') (unit session-9.scope)... Sep 4 17:18:48.354245 systemd[1]: Reloading... Sep 4 17:18:48.438105 zram_generator::config[2735]: No configuration found. Sep 4 17:18:48.556545 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:18:48.634020 systemd[1]: Reloading finished in 279 ms. Sep 4 17:18:48.677809 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:18:48.677880 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:18:48.678406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:48.682517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:48.948765 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:48.954188 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:18:48.990883 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:18:48.990883 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:18:48.990883 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
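The three deprecation warnings above are the kubelet asking for these flags to move into the file passed via --config. A hedged sketch of the config-file equivalents for the two flags that have KubeletConfiguration fields: the runtime endpoint value is a placeholder rather than anything read from this node, the volume plugin dir is the path this kubelet logs a little further below, and --pod-infra-container-image has no field because, as the warning itself says, the sandbox image will come from the CRI.

    import json

    # Illustrative fragment only (kubelet.config.k8s.io/v1beta1 field names);
    # not the config actually present on this node.
    kubelet_config_fragment = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        # replaces --container-runtime-endpoint (placeholder value)
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        # replaces --volume-plugin-dir (path taken from the Flexvolume message below)
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }
    print(json.dumps(kubelet_config_fragment, indent=2))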
Sep 4 17:18:48.991277 kubelet[2806]: I0904 17:18:48.990922 2806 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:18:49.732600 kubelet[2806]: I0904 17:18:49.732569 2806 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:18:49.734095 kubelet[2806]: I0904 17:18:49.732809 2806 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:18:49.734095 kubelet[2806]: I0904 17:18:49.733046 2806 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:18:49.743922 kubelet[2806]: I0904 17:18:49.743894 2806 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:18:49.744735 kubelet[2806]: E0904 17:18:49.744658 2806 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.765566 kubelet[2806]: I0904 17:18:49.765529 2806 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:18:49.767162 kubelet[2806]: I0904 17:18:49.767124 2806 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:18:49.767343 kubelet[2806]: I0904 17:18:49.767167 2806 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.2.1-a-bdc284204f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:18:49.767427 kubelet[2806]: I0904 17:18:49.767354 2806 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:18:49.767427 kubelet[2806]: I0904 17:18:49.767363 2806 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:18:49.767495 kubelet[2806]: I0904 17:18:49.767477 2806 state_mem.go:36] "Initialized new in-memory state store" 
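The container_manager_linux entry above dumps the resolved node configuration as a single JSON object, including the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). One rough way to pull that object back out of a saved copy of the line and inspect it; illustrative only, and it assumes no braces occur inside string values, which holds for this dump:

    import json

    def json_object_after(text, marker="nodeConfig="):
        """Return the first balanced {...} object that follows `marker` in `text`."""
        start = text.index(marker) + len(marker)
        depth = 0
        for i, ch in enumerate(text[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return json.loads(text[start:i + 1])
        raise ValueError("unbalanced braces after marker")

    cfg = json_object_after(open("kubelet-start.log").read())  # hypothetical saved line
    for t in cfg["HardEvictionThresholds"]:
        value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
        print(t["Signal"], t["Operator"], value)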
Sep 4 17:18:49.768209 kubelet[2806]: I0904 17:18:49.768190 2806 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:18:49.768263 kubelet[2806]: I0904 17:18:49.768213 2806 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:18:49.768263 kubelet[2806]: I0904 17:18:49.768243 2806 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:18:49.768263 kubelet[2806]: I0904 17:18:49.768262 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:18:49.807098 kubelet[2806]: W0904 17:18:49.805898 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.807098 kubelet[2806]: E0904 17:18:49.805961 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.807098 kubelet[2806]: W0904 17:18:49.806236 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-bdc284204f&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.807098 kubelet[2806]: E0904 17:18:49.806276 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-bdc284204f&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.807098 kubelet[2806]: I0904 17:18:49.806633 2806 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:18:49.807098 kubelet[2806]: I0904 17:18:49.806827 2806 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:18:49.807098 kubelet[2806]: W0904 17:18:49.806887 2806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
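Every "connection refused" against https://10.200.20.21:6443 above, and in the entries that follow, is expected at this point in the boot: this kubelet is about to create the kube-apiserver itself as a static pod, so nothing is listening on port 6443 yet, and the client-go reflectors, the certificate bootstrap, and the lease controller simply keep retrying. An illustrative stdlib probe of the same endpoint, of the kind one might run while watching a bootstrap like this:

    import socket
    import time

    def wait_for_apiserver(host="10.200.20.21", port=6443, timeout=180):
        """Poll until a TCP connection to host:port succeeds, with capped backoff."""
        deadline = time.monotonic() + timeout
        delay = 1.0
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=2):
                    return True
            except OSError:
                time.sleep(delay)
                delay = min(delay * 2, 10.0)
        return False

    if __name__ == "__main__":
        print("apiserver reachable:", wait_for_apiserver())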
Sep 4 17:18:49.808370 kubelet[2806]: I0904 17:18:49.808345 2806 server.go:1264] "Started kubelet" Sep 4 17:18:49.808882 kubelet[2806]: I0904 17:18:49.808850 2806 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:18:49.809792 kubelet[2806]: I0904 17:18:49.809771 2806 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:18:49.810260 kubelet[2806]: I0904 17:18:49.810198 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:18:49.810545 kubelet[2806]: I0904 17:18:49.810510 2806 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:18:49.810732 kubelet[2806]: E0904 17:18:49.810632 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.21:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.2.1-a-bdc284204f.17f21a1f7eb37ce9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.1-a-bdc284204f,UID:ci-3975.2.1-a-bdc284204f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.1-a-bdc284204f,},FirstTimestamp:2024-09-04 17:18:49.808321769 +0000 UTC m=+0.851289499,LastTimestamp:2024-09-04 17:18:49.808321769 +0000 UTC m=+0.851289499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.1-a-bdc284204f,}" Sep 4 17:18:49.813454 kubelet[2806]: I0904 17:18:49.813435 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:18:49.814211 kubelet[2806]: I0904 17:18:49.814183 2806 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:18:49.814309 kubelet[2806]: I0904 17:18:49.814290 2806 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:18:49.814365 kubelet[2806]: I0904 17:18:49.814350 2806 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:18:49.814842 kubelet[2806]: W0904 17:18:49.814730 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.814842 kubelet[2806]: E0904 17:18:49.814783 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.814917 kubelet[2806]: E0904 17:18:49.814856 2806 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:18:49.815294 kubelet[2806]: E0904 17:18:49.814964 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:49.816928 kubelet[2806]: E0904 17:18:49.816550 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-bdc284204f?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="200ms" Sep 4 17:18:49.819212 kubelet[2806]: I0904 17:18:49.819182 2806 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:18:49.819212 kubelet[2806]: I0904 17:18:49.819204 2806 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:18:49.819299 kubelet[2806]: I0904 17:18:49.819270 2806 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:18:49.878349 kubelet[2806]: I0904 17:18:49.878215 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:18:49.879727 kubelet[2806]: I0904 17:18:49.879426 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:18:49.879727 kubelet[2806]: I0904 17:18:49.879461 2806 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:18:49.879727 kubelet[2806]: I0904 17:18:49.879481 2806 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:18:49.879727 kubelet[2806]: E0904 17:18:49.879518 2806 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:18:49.882546 kubelet[2806]: W0904 17:18:49.882501 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.882546 kubelet[2806]: E0904 17:18:49.882543 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:49.957094 kubelet[2806]: I0904 17:18:49.957049 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:49.957417 kubelet[2806]: E0904 17:18:49.957386 2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:49.958242 kubelet[2806]: I0904 17:18:49.958188 2806 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:18:49.958242 kubelet[2806]: I0904 17:18:49.958225 2806 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:18:49.958574 kubelet[2806]: I0904 17:18:49.958369 2806 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:18:49.980547 kubelet[2806]: E0904 17:18:49.980511 2806 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:18:50.106289 
kubelet[2806]: E0904 17:18:50.017035 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-bdc284204f?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="400ms" Sep 4 17:18:50.159465 kubelet[2806]: I0904 17:18:50.159428 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:50.159951 kubelet[2806]: E0904 17:18:50.159924 2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:50.180984 kubelet[2806]: E0904 17:18:50.180955 2806 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:18:50.263865 kubelet[2806]: I0904 17:18:50.263530 2806 policy_none.go:49] "None policy: Start" Sep 4 17:18:50.264566 kubelet[2806]: I0904 17:18:50.264491 2806 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:18:50.265002 kubelet[2806]: I0904 17:18:50.264699 2806 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:18:50.417656 kubelet[2806]: E0904 17:18:50.417612 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-bdc284204f?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="800ms" Sep 4 17:18:50.561904 kubelet[2806]: I0904 17:18:50.561852 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:50.562211 kubelet[2806]: E0904 17:18:50.562184 2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:50.581390 kubelet[2806]: E0904 17:18:50.581355 2806 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:18:50.692340 kubelet[2806]: W0904 17:18:50.692201 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:50.692340 kubelet[2806]: E0904 17:18:50.692260 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:50.915117 kubelet[2806]: W0904 17:18:50.915029 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-bdc284204f&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:50.915117 kubelet[2806]: E0904 17:18:50.915114 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-bdc284204f&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection 
refused Sep 4 17:18:51.021596 kubelet[2806]: W0904 17:18:50.993616 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:51.021596 kubelet[2806]: E0904 17:18:50.993668 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:51.026754 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:18:51.036168 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:18:51.040109 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:18:51.050856 kubelet[2806]: I0904 17:18:51.050825 2806 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:18:51.051337 kubelet[2806]: I0904 17:18:51.051024 2806 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:18:51.051337 kubelet[2806]: I0904 17:18:51.051134 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:18:51.052655 kubelet[2806]: E0904 17:18:51.052639 2806 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:51.218331 kubelet[2806]: E0904 17:18:51.218290 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-bdc284204f?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="1.6s" Sep 4 17:18:51.221631 kubelet[2806]: W0904 17:18:51.221605 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:51.221684 kubelet[2806]: E0904 17:18:51.221641 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:51.364220 kubelet[2806]: I0904 17:18:51.363893 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.364315 kubelet[2806]: E0904 17:18:51.364237 2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.382487 kubelet[2806]: I0904 17:18:51.382447 2806 topology_manager.go:215] "Topology Admit Handler" podUID="2cb83bb4d08efd25269fee92a8f99be8" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.383985 kubelet[2806]: I0904 17:18:51.383949 2806 topology_manager.go:215] "Topology Admit Handler" podUID="2f32691de32ef50f6f68b76bbbb59a96" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 
4 17:18:51.385422 kubelet[2806]: I0904 17:18:51.385240 2806 topology_manager.go:215] "Topology Admit Handler" podUID="d654bc701cb5a4f63075d756bd0acdf0" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.391891 systemd[1]: Created slice kubepods-burstable-pod2cb83bb4d08efd25269fee92a8f99be8.slice - libcontainer container kubepods-burstable-pod2cb83bb4d08efd25269fee92a8f99be8.slice. Sep 4 17:18:51.412366 systemd[1]: Created slice kubepods-burstable-pod2f32691de32ef50f6f68b76bbbb59a96.slice - libcontainer container kubepods-burstable-pod2f32691de32ef50f6f68b76bbbb59a96.slice. Sep 4 17:18:51.422972 systemd[1]: Created slice kubepods-burstable-podd654bc701cb5a4f63075d756bd0acdf0.slice - libcontainer container kubepods-burstable-podd654bc701cb5a4f63075d756bd0acdf0.slice. Sep 4 17:18:51.424638 kubelet[2806]: I0904 17:18:51.424609 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d654bc701cb5a4f63075d756bd0acdf0-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-bdc284204f\" (UID: \"d654bc701cb5a4f63075d756bd0acdf0\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424715 kubelet[2806]: I0904 17:18:51.424644 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cb83bb4d08efd25269fee92a8f99be8-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-bdc284204f\" (UID: \"2cb83bb4d08efd25269fee92a8f99be8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424715 kubelet[2806]: I0904 17:18:51.424664 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424715 kubelet[2806]: I0904 17:18:51.424682 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cb83bb4d08efd25269fee92a8f99be8-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-a-bdc284204f\" (UID: \"2cb83bb4d08efd25269fee92a8f99be8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424715 kubelet[2806]: I0904 17:18:51.424697 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cb83bb4d08efd25269fee92a8f99be8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-bdc284204f\" (UID: \"2cb83bb4d08efd25269fee92a8f99be8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424715 kubelet[2806]: I0904 17:18:51.424712 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424825 kubelet[2806]: I0904 17:18:51.424728 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424825 kubelet[2806]: I0904 17:18:51.424747 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.424825 kubelet[2806]: I0904 17:18:51.424765 2806 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:51.710504 containerd[1713]: time="2024-09-04T17:18:51.710462172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-bdc284204f,Uid:2cb83bb4d08efd25269fee92a8f99be8,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:51.721875 containerd[1713]: time="2024-09-04T17:18:51.721825476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-bdc284204f,Uid:2f32691de32ef50f6f68b76bbbb59a96,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:51.726535 containerd[1713]: time="2024-09-04T17:18:51.726474006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-bdc284204f,Uid:d654bc701cb5a4f63075d756bd0acdf0,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:51.793598 kubelet[2806]: E0904 17:18:51.793567 2806 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:52.411287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1057997219.mount: Deactivated successfully. 
Sep 4 17:18:52.451114 containerd[1713]: time="2024-09-04T17:18:52.450950337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:18:52.453530 containerd[1713]: time="2024-09-04T17:18:52.453489502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 17:18:52.457235 containerd[1713]: time="2024-09-04T17:18:52.457194190Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:18:52.460814 containerd[1713]: time="2024-09-04T17:18:52.460099116Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:18:52.464051 containerd[1713]: time="2024-09-04T17:18:52.464015845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:18:52.468107 containerd[1713]: time="2024-09-04T17:18:52.467890373Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:18:52.471962 containerd[1713]: time="2024-09-04T17:18:52.471901941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:18:52.475769 containerd[1713]: time="2024-09-04T17:18:52.475734069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:18:52.476689 containerd[1713]: time="2024-09-04T17:18:52.476465791Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 749.881305ms" Sep 4 17:18:52.478861 containerd[1713]: time="2024-09-04T17:18:52.478822036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 756.794879ms" Sep 4 17:18:52.479513 containerd[1713]: time="2024-09-04T17:18:52.479471277Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 768.911104ms" Sep 4 17:18:52.735454 containerd[1713]: time="2024-09-04T17:18:52.735282498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:52.735454 containerd[1713]: time="2024-09-04T17:18:52.735347738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:52.737334 containerd[1713]: time="2024-09-04T17:18:52.735830499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:52.737334 containerd[1713]: time="2024-09-04T17:18:52.736146460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:52.741052 containerd[1713]: time="2024-09-04T17:18:52.740769030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:52.741052 containerd[1713]: time="2024-09-04T17:18:52.740818110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:52.741052 containerd[1713]: time="2024-09-04T17:18:52.740845950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:52.741052 containerd[1713]: time="2024-09-04T17:18:52.740859390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:52.741587 containerd[1713]: time="2024-09-04T17:18:52.741383551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:52.741587 containerd[1713]: time="2024-09-04T17:18:52.741432951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:52.741587 containerd[1713]: time="2024-09-04T17:18:52.741450431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:52.741587 containerd[1713]: time="2024-09-04T17:18:52.741462591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:52.781254 systemd[1]: Started cri-containerd-30865accc8426de317855561b82374bcb01c6228ee37fbc8128bfb86335b2902.scope - libcontainer container 30865accc8426de317855561b82374bcb01c6228ee37fbc8128bfb86335b2902. Sep 4 17:18:52.782858 systemd[1]: Started cri-containerd-c31d034ba1fbe1cef859dfedd48b4538596497a2ac198471568e4f7c01d5daa9.scope - libcontainer container c31d034ba1fbe1cef859dfedd48b4538596497a2ac198471568e4f7c01d5daa9. Sep 4 17:18:52.783781 systemd[1]: Started cri-containerd-f14380257bb081f645ea2d4590f8b63a25101e292318f8d8ee7d873545bafe73.scope - libcontainer container f14380257bb081f645ea2d4590f8b63a25101e292318f8d8ee7d873545bafe73. 
Sep 4 17:18:52.819938 kubelet[2806]: E0904 17:18:52.819895 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.1-a-bdc284204f?timeout=10s\": dial tcp 10.200.20.21:6443: connect: connection refused" interval="3.2s" Sep 4 17:18:52.831780 containerd[1713]: time="2024-09-04T17:18:52.831613621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.1-a-bdc284204f,Uid:d654bc701cb5a4f63075d756bd0acdf0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f14380257bb081f645ea2d4590f8b63a25101e292318f8d8ee7d873545bafe73\"" Sep 4 17:18:52.836520 containerd[1713]: time="2024-09-04T17:18:52.836455672Z" level=info msg="CreateContainer within sandbox \"f14380257bb081f645ea2d4590f8b63a25101e292318f8d8ee7d873545bafe73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:18:52.840885 containerd[1713]: time="2024-09-04T17:18:52.840056079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.1-a-bdc284204f,Uid:2f32691de32ef50f6f68b76bbbb59a96,Namespace:kube-system,Attempt:0,} returns sandbox id \"c31d034ba1fbe1cef859dfedd48b4538596497a2ac198471568e4f7c01d5daa9\"" Sep 4 17:18:52.846029 containerd[1713]: time="2024-09-04T17:18:52.845984732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.1-a-bdc284204f,Uid:2cb83bb4d08efd25269fee92a8f99be8,Namespace:kube-system,Attempt:0,} returns sandbox id \"30865accc8426de317855561b82374bcb01c6228ee37fbc8128bfb86335b2902\"" Sep 4 17:18:52.849877 containerd[1713]: time="2024-09-04T17:18:52.849830460Z" level=info msg="CreateContainer within sandbox \"30865accc8426de317855561b82374bcb01c6228ee37fbc8128bfb86335b2902\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:18:52.850435 containerd[1713]: time="2024-09-04T17:18:52.850390021Z" level=info msg="CreateContainer within sandbox \"c31d034ba1fbe1cef859dfedd48b4538596497a2ac198471568e4f7c01d5daa9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:18:52.888866 containerd[1713]: time="2024-09-04T17:18:52.888796102Z" level=info msg="CreateContainer within sandbox \"f14380257bb081f645ea2d4590f8b63a25101e292318f8d8ee7d873545bafe73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e507b0c2553947583b7d18e4deabba487cd42b43de83f14509995c1367d2e5d7\"" Sep 4 17:18:52.890254 containerd[1713]: time="2024-09-04T17:18:52.890206305Z" level=info msg="StartContainer for \"e507b0c2553947583b7d18e4deabba487cd42b43de83f14509995c1367d2e5d7\"" Sep 4 17:18:52.912235 systemd[1]: Started cri-containerd-e507b0c2553947583b7d18e4deabba487cd42b43de83f14509995c1367d2e5d7.scope - libcontainer container e507b0c2553947583b7d18e4deabba487cd42b43de83f14509995c1367d2e5d7. 
Sep 4 17:18:52.913510 kubelet[2806]: W0904 17:18:52.913470 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-bdc284204f&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:52.913636 kubelet[2806]: E0904 17:18:52.913613 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.1-a-bdc284204f&limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:52.928125 containerd[1713]: time="2024-09-04T17:18:52.928042465Z" level=info msg="CreateContainer within sandbox \"c31d034ba1fbe1cef859dfedd48b4538596497a2ac198471568e4f7c01d5daa9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2360c96b51087cd4148564416eb757d1325886de4703fdda44f8d602927bb176\"" Sep 4 17:18:52.928754 containerd[1713]: time="2024-09-04T17:18:52.928723507Z" level=info msg="StartContainer for \"2360c96b51087cd4148564416eb757d1325886de4703fdda44f8d602927bb176\"" Sep 4 17:18:52.952839 containerd[1713]: time="2024-09-04T17:18:52.952782158Z" level=info msg="StartContainer for \"e507b0c2553947583b7d18e4deabba487cd42b43de83f14509995c1367d2e5d7\" returns successfully" Sep 4 17:18:52.953140 containerd[1713]: time="2024-09-04T17:18:52.952920678Z" level=info msg="CreateContainer within sandbox \"30865accc8426de317855561b82374bcb01c6228ee37fbc8128bfb86335b2902\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9a76320aa6099446c2d71c71d79967445b8a6792d352697870ea4e7afa8605d1\"" Sep 4 17:18:52.955519 containerd[1713]: time="2024-09-04T17:18:52.955483603Z" level=info msg="StartContainer for \"9a76320aa6099446c2d71c71d79967445b8a6792d352697870ea4e7afa8605d1\"" Sep 4 17:18:52.971143 kubelet[2806]: I0904 17:18:52.969033 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:52.971143 kubelet[2806]: E0904 17:18:52.969813 2806 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.21:6443/api/v1/nodes\": dial tcp 10.200.20.21:6443: connect: connection refused" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:52.975290 systemd[1]: Started cri-containerd-2360c96b51087cd4148564416eb757d1325886de4703fdda44f8d602927bb176.scope - libcontainer container 2360c96b51087cd4148564416eb757d1325886de4703fdda44f8d602927bb176. Sep 4 17:18:53.004844 systemd[1]: Started cri-containerd-9a76320aa6099446c2d71c71d79967445b8a6792d352697870ea4e7afa8605d1.scope - libcontainer container 9a76320aa6099446c2d71c71d79967445b8a6792d352697870ea4e7afa8605d1. 
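The "Unable to register node with API server" entry just above corresponds to a POST of a Node object to /api/v1/nodes, refused until kube-apiserver is listening. A hedged client-go sketch of that registration call, under the same hypothetical-kubeconfig assumption as the previous snippet (not the kubelet's implementation):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: "ci-3975.2.1-a-bdc284204f"}}
	if _, err := client.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{}); err != nil {
		// While kube-apiserver is still starting, this POST fails with
		// "connection refused", which is what the kubelet logs above.
		fmt.Println("node registration failed:", err)
		return
	}
	fmt.Println("node registered")
}
```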
Sep 4 17:18:53.028440 containerd[1713]: time="2024-09-04T17:18:53.028328237Z" level=info msg="StartContainer for \"2360c96b51087cd4148564416eb757d1325886de4703fdda44f8d602927bb176\" returns successfully" Sep 4 17:18:53.060152 containerd[1713]: time="2024-09-04T17:18:53.060101824Z" level=info msg="StartContainer for \"9a76320aa6099446c2d71c71d79967445b8a6792d352697870ea4e7afa8605d1\" returns successfully" Sep 4 17:18:53.110210 kubelet[2806]: W0904 17:18:53.110151 2806 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:53.110396 kubelet[2806]: E0904 17:18:53.110376 2806 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.21:6443: connect: connection refused Sep 4 17:18:55.319818 kubelet[2806]: E0904 17:18:55.319712 2806 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975.2.1-a-bdc284204f.17f21a1f7eb37ce9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.2.1-a-bdc284204f,UID:ci-3975.2.1-a-bdc284204f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.2.1-a-bdc284204f,},FirstTimestamp:2024-09-04 17:18:49.808321769 +0000 UTC m=+0.851289499,LastTimestamp:2024-09-04 17:18:49.808321769 +0000 UTC m=+0.851289499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.2.1-a-bdc284204f,}" Sep 4 17:18:55.590893 kubelet[2806]: E0904 17:18:55.590562 2806 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3975.2.1-a-bdc284204f" not found Sep 4 17:18:55.948885 kubelet[2806]: E0904 17:18:55.948855 2806 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-3975.2.1-a-bdc284204f" not found Sep 4 17:18:56.023280 kubelet[2806]: E0904 17:18:56.023218 2806 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.1-a-bdc284204f\" not found" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:56.173652 kubelet[2806]: I0904 17:18:56.173622 2806 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:56.179366 kubelet[2806]: I0904 17:18:56.179264 2806 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:56.189215 kubelet[2806]: E0904 17:18:56.189131 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.289967 kubelet[2806]: E0904 17:18:56.289851 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.390415 kubelet[2806]: E0904 17:18:56.390377 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.490899 kubelet[2806]: E0904 17:18:56.490854 2806 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.591502 kubelet[2806]: E0904 17:18:56.591389 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.691620 kubelet[2806]: E0904 17:18:56.691487 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.792357 kubelet[2806]: E0904 17:18:56.792318 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.893113 kubelet[2806]: E0904 17:18:56.893066 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:56.993690 kubelet[2806]: E0904 17:18:56.993655 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:57.049734 systemd[1]: Reloading requested from client PID 3084 ('systemctl') (unit session-9.scope)... Sep 4 17:18:57.049752 systemd[1]: Reloading... Sep 4 17:18:57.094421 kubelet[2806]: E0904 17:18:57.094380 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:57.147163 zram_generator::config[3130]: No configuration found. Sep 4 17:18:57.194569 kubelet[2806]: E0904 17:18:57.194523 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:57.228450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:18:57.295178 kubelet[2806]: E0904 17:18:57.295134 2806 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3975.2.1-a-bdc284204f\" not found" Sep 4 17:18:57.317016 systemd[1]: Reloading finished in 266 ms. Sep 4 17:18:57.354048 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:57.370099 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:18:57.370341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:57.370393 systemd[1]: kubelet.service: Consumed 1.103s CPU time, 110.9M memory peak, 0B memory swap peak. Sep 4 17:18:57.375284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:18:57.468824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:18:57.475415 (kubelet)[3185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:18:57.529015 kubelet[3185]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:18:57.529015 kubelet[3185]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:18:57.529015 kubelet[3185]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:18:57.529015 kubelet[3185]: I0904 17:18:57.527874 3185 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:18:57.532911 kubelet[3185]: I0904 17:18:57.532886 3185 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:18:57.533039 kubelet[3185]: I0904 17:18:57.533029 3185 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:18:57.533316 kubelet[3185]: I0904 17:18:57.533300 3185 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:18:57.534672 kubelet[3185]: I0904 17:18:57.534650 3185 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:18:57.536588 kubelet[3185]: I0904 17:18:57.536564 3185 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:18:57.544260 kubelet[3185]: I0904 17:18:57.544237 3185 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:18:57.544459 kubelet[3185]: I0904 17:18:57.544431 3185 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:18:57.544631 kubelet[3185]: I0904 17:18:57.544458 3185 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.2.1-a-bdc284204f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:18:57.544716 kubelet[3185]: I0904 17:18:57.544633 3185 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:18:57.544716 kubelet[3185]: I0904 17:18:57.544642 3185 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:18:57.544716 kubelet[3185]: I0904 17:18:57.544676 3185 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:18:57.544801 kubelet[3185]: I0904 17:18:57.544772 3185 kubelet.go:400] "Attempting to sync node with API server" Sep 4 
17:18:57.544801 kubelet[3185]: I0904 17:18:57.544785 3185 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:18:57.544857 kubelet[3185]: I0904 17:18:57.544821 3185 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:18:57.544857 kubelet[3185]: I0904 17:18:57.544837 3185 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:18:57.548108 kubelet[3185]: I0904 17:18:57.546616 3185 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:18:57.548108 kubelet[3185]: I0904 17:18:57.546771 3185 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:18:57.548108 kubelet[3185]: I0904 17:18:57.547132 3185 server.go:1264] "Started kubelet" Sep 4 17:18:57.551791 kubelet[3185]: I0904 17:18:57.551770 3185 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:18:57.561597 kubelet[3185]: I0904 17:18:57.561554 3185 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:18:57.562718 kubelet[3185]: I0904 17:18:57.562696 3185 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:18:57.566030 kubelet[3185]: I0904 17:18:57.565976 3185 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:18:57.566338 kubelet[3185]: I0904 17:18:57.566323 3185 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:18:57.567610 kubelet[3185]: I0904 17:18:57.567591 3185 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:18:57.571389 kubelet[3185]: I0904 17:18:57.571373 3185 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:18:57.571605 kubelet[3185]: I0904 17:18:57.571593 3185 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:18:57.576271 kubelet[3185]: I0904 17:18:57.576238 3185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:18:57.578988 kubelet[3185]: I0904 17:18:57.578953 3185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:18:57.578988 kubelet[3185]: I0904 17:18:57.578993 3185 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:18:57.579098 kubelet[3185]: I0904 17:18:57.579008 3185 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:18:57.579098 kubelet[3185]: E0904 17:18:57.579055 3185 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:18:57.584654 kubelet[3185]: I0904 17:18:57.583575 3185 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:18:57.585061 kubelet[3185]: I0904 17:18:57.585036 3185 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:18:57.589970 kubelet[3185]: E0904 17:18:57.589947 3185 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:18:57.590537 kubelet[3185]: I0904 17:18:57.590515 3185 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:18:57.628583 kubelet[3185]: I0904 17:18:57.628558 3185 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:18:57.628738 kubelet[3185]: I0904 17:18:57.628725 3185 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:18:57.628842 kubelet[3185]: I0904 17:18:57.628832 3185 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:18:57.629126 kubelet[3185]: I0904 17:18:57.629111 3185 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:18:57.629218 kubelet[3185]: I0904 17:18:57.629194 3185 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:18:57.629290 kubelet[3185]: I0904 17:18:57.629283 3185 policy_none.go:49] "None policy: Start" Sep 4 17:18:57.630151 kubelet[3185]: I0904 17:18:57.630025 3185 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:18:57.630151 kubelet[3185]: I0904 17:18:57.630126 3185 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:18:57.630275 kubelet[3185]: I0904 17:18:57.630256 3185 state_mem.go:75] "Updated machine memory state" Sep 4 17:18:57.635618 kubelet[3185]: I0904 17:18:57.635065 3185 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:18:57.635618 kubelet[3185]: I0904 17:18:57.635245 3185 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:18:57.635618 kubelet[3185]: I0904 17:18:57.635359 3185 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:18:57.672914 kubelet[3185]: I0904 17:18:57.672038 3185 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:57.680039 kubelet[3185]: I0904 17:18:57.679987 3185 topology_manager.go:215] "Topology Admit Handler" podUID="2cb83bb4d08efd25269fee92a8f99be8" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:57.680180 kubelet[3185]: I0904 17:18:57.680130 3185 topology_manager.go:215] "Topology Admit Handler" podUID="2f32691de32ef50f6f68b76bbbb59a96" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:57.680208 kubelet[3185]: I0904 17:18:57.680185 3185 topology_manager.go:215] "Topology Admit Handler" podUID="d654bc701cb5a4f63075d756bd0acdf0" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.009902 kubelet[3185]: I0904 17:18:58.009857 3185 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.010045 kubelet[3185]: I0904 17:18:58.009940 3185 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.014604 kubelet[3185]: W0904 17:18:58.014420 3185 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:18:58.014851 kubelet[3185]: W0904 17:18:58.014771 3185 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:18:58.016058 kubelet[3185]: W0904 17:18:58.016043 3185 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must not contain dots] Sep 4 17:18:58.104773 kubelet[3185]: I0904 17:18:58.104712 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.104773 kubelet[3185]: I0904 17:18:58.104770 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d654bc701cb5a4f63075d756bd0acdf0-kubeconfig\") pod \"kube-scheduler-ci-3975.2.1-a-bdc284204f\" (UID: \"d654bc701cb5a4f63075d756bd0acdf0\") " pod="kube-system/kube-scheduler-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.104950 kubelet[3185]: I0904 17:18:58.104834 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2cb83bb4d08efd25269fee92a8f99be8-k8s-certs\") pod \"kube-apiserver-ci-3975.2.1-a-bdc284204f\" (UID: \"2cb83bb4d08efd25269fee92a8f99be8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.105013 kubelet[3185]: I0904 17:18:58.104853 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2cb83bb4d08efd25269fee92a8f99be8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.1-a-bdc284204f\" (UID: \"2cb83bb4d08efd25269fee92a8f99be8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.105042 kubelet[3185]: I0904 17:18:58.105020 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-ca-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.105042 kubelet[3185]: I0904 17:18:58.105040 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.127690 kubelet[3185]: I0904 17:18:58.105055 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.127690 kubelet[3185]: I0904 17:18:58.105132 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f32691de32ef50f6f68b76bbbb59a96-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.1-a-bdc284204f\" (UID: \"2f32691de32ef50f6f68b76bbbb59a96\") " pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.127690 
kubelet[3185]: I0904 17:18:58.105193 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2cb83bb4d08efd25269fee92a8f99be8-ca-certs\") pod \"kube-apiserver-ci-3975.2.1-a-bdc284204f\" (UID: \"2cb83bb4d08efd25269fee92a8f99be8\") " pod="kube-system/kube-apiserver-ci-3975.2.1-a-bdc284204f" Sep 4 17:18:58.545851 kubelet[3185]: I0904 17:18:58.545807 3185 apiserver.go:52] "Watching apiserver" Sep 4 17:18:58.572727 kubelet[3185]: I0904 17:18:58.571981 3185 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:18:58.699965 kubelet[3185]: I0904 17:18:58.699877 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.1-a-bdc284204f" podStartSLOduration=0.699843101 podStartE2EDuration="699.843101ms" podCreationTimestamp="2024-09-04 17:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:58.674874737 +0000 UTC m=+1.196068256" watchObservedRunningTime="2024-09-04 17:18:58.699843101 +0000 UTC m=+1.221036660" Sep 4 17:18:58.724160 kubelet[3185]: I0904 17:18:58.724090 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.1-a-bdc284204f" podStartSLOduration=0.724059224 podStartE2EDuration="724.059224ms" podCreationTimestamp="2024-09-04 17:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:58.723776983 +0000 UTC m=+1.244970542" watchObservedRunningTime="2024-09-04 17:18:58.724059224 +0000 UTC m=+1.245252783" Sep 4 17:18:58.724725 kubelet[3185]: I0904 17:18:58.724176 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.2.1-a-bdc284204f" podStartSLOduration=0.724171424 podStartE2EDuration="724.171424ms" podCreationTimestamp="2024-09-04 17:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:58.700530783 +0000 UTC m=+1.221724342" watchObservedRunningTime="2024-09-04 17:18:58.724171424 +0000 UTC m=+1.245364983" Sep 4 17:19:02.324966 sudo[2203]: pam_unix(sudo:session): session closed for user root Sep 4 17:19:02.410796 sshd[2200]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:02.414185 systemd-logind[1676]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:19:02.414339 systemd[1]: sshd@6-10.200.20.21:22-10.200.16.10:55674.service: Deactivated successfully. Sep 4 17:19:02.415806 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:19:02.415965 systemd[1]: session-9.scope: Consumed 6.721s CPU time, 136.3M memory peak, 0B memory swap peak. Sep 4 17:19:02.417907 systemd-logind[1676]: Removed session 9. Sep 4 17:19:13.544822 kubelet[3185]: I0904 17:19:13.544786 3185 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:19:13.546505 kubelet[3185]: I0904 17:19:13.545397 3185 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:19:13.546538 containerd[1713]: time="2024-09-04T17:19:13.545230556Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
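The "Updating Pod CIDR" entry shows the kubelet handing 192.168.0.0/24 to the container runtime, which then waits for a CNI config to be dropped in by another component. That CIDR lives on the Node object itself; a small client-go sketch for reading it back, again under the hypothetical-kubeconfig assumption:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.Background(),
		"ci-3975.2.1-a-bdc284204f", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Matches the "Updating Pod CIDR ... newPodCIDR=192.168.0.0/24" entry above.
	fmt.Println("spec.podCIDR: ", node.Spec.PodCIDR)
	fmt.Println("spec.podCIDRs:", node.Spec.PodCIDRs)
}
```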
Sep 4 17:19:14.367341 kubelet[3185]: I0904 17:19:14.367227 3185 topology_manager.go:215] "Topology Admit Handler" podUID="82432d33-dd68-4a63-8580-8f7da7fcf625" podNamespace="kube-system" podName="kube-proxy-rpcdf" Sep 4 17:19:14.378406 systemd[1]: Created slice kubepods-besteffort-pod82432d33_dd68_4a63_8580_8f7da7fcf625.slice - libcontainer container kubepods-besteffort-pod82432d33_dd68_4a63_8580_8f7da7fcf625.slice. Sep 4 17:19:14.499494 kubelet[3185]: I0904 17:19:14.499417 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82432d33-dd68-4a63-8580-8f7da7fcf625-kube-proxy\") pod \"kube-proxy-rpcdf\" (UID: \"82432d33-dd68-4a63-8580-8f7da7fcf625\") " pod="kube-system/kube-proxy-rpcdf" Sep 4 17:19:14.499494 kubelet[3185]: I0904 17:19:14.499460 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82432d33-dd68-4a63-8580-8f7da7fcf625-lib-modules\") pod \"kube-proxy-rpcdf\" (UID: \"82432d33-dd68-4a63-8580-8f7da7fcf625\") " pod="kube-system/kube-proxy-rpcdf" Sep 4 17:19:14.499494 kubelet[3185]: I0904 17:19:14.499481 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82432d33-dd68-4a63-8580-8f7da7fcf625-xtables-lock\") pod \"kube-proxy-rpcdf\" (UID: \"82432d33-dd68-4a63-8580-8f7da7fcf625\") " pod="kube-system/kube-proxy-rpcdf" Sep 4 17:19:14.499661 kubelet[3185]: I0904 17:19:14.499497 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvqg7\" (UniqueName: \"kubernetes.io/projected/82432d33-dd68-4a63-8580-8f7da7fcf625-kube-api-access-xvqg7\") pod \"kube-proxy-rpcdf\" (UID: \"82432d33-dd68-4a63-8580-8f7da7fcf625\") " pod="kube-system/kube-proxy-rpcdf" Sep 4 17:19:14.686860 containerd[1713]: time="2024-09-04T17:19:14.686811901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpcdf,Uid:82432d33-dd68-4a63-8580-8f7da7fcf625,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:14.689960 kubelet[3185]: I0904 17:19:14.688790 3185 topology_manager.go:215] "Topology Admit Handler" podUID="424b9642-6504-4feb-a28c-8306143d4b3c" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-jrqp5" Sep 4 17:19:14.698999 systemd[1]: Created slice kubepods-besteffort-pod424b9642_6504_4feb_a28c_8306143d4b3c.slice - libcontainer container kubepods-besteffort-pod424b9642_6504_4feb_a28c_8306143d4b3c.slice. Sep 4 17:19:14.758561 containerd[1713]: time="2024-09-04T17:19:14.758353357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:14.758561 containerd[1713]: time="2024-09-04T17:19:14.758413718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:14.758561 containerd[1713]: time="2024-09-04T17:19:14.758431798Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:14.758561 containerd[1713]: time="2024-09-04T17:19:14.758445598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:14.778214 systemd[1]: Started cri-containerd-100c0e4593c09e955bd75281648407ced2eb07f780fb6e93025b506fdbea945e.scope - libcontainer container 100c0e4593c09e955bd75281648407ced2eb07f780fb6e93025b506fdbea945e. Sep 4 17:19:14.795087 containerd[1713]: time="2024-09-04T17:19:14.795008988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rpcdf,Uid:82432d33-dd68-4a63-8580-8f7da7fcf625,Namespace:kube-system,Attempt:0,} returns sandbox id \"100c0e4593c09e955bd75281648407ced2eb07f780fb6e93025b506fdbea945e\"" Sep 4 17:19:14.799023 containerd[1713]: time="2024-09-04T17:19:14.798970155Z" level=info msg="CreateContainer within sandbox \"100c0e4593c09e955bd75281648407ced2eb07f780fb6e93025b506fdbea945e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:19:14.801127 kubelet[3185]: I0904 17:19:14.801014 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/424b9642-6504-4feb-a28c-8306143d4b3c-var-lib-calico\") pod \"tigera-operator-77f994b5bb-jrqp5\" (UID: \"424b9642-6504-4feb-a28c-8306143d4b3c\") " pod="tigera-operator/tigera-operator-77f994b5bb-jrqp5" Sep 4 17:19:14.801127 kubelet[3185]: I0904 17:19:14.801063 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcjr2\" (UniqueName: \"kubernetes.io/projected/424b9642-6504-4feb-a28c-8306143d4b3c-kube-api-access-tcjr2\") pod \"tigera-operator-77f994b5bb-jrqp5\" (UID: \"424b9642-6504-4feb-a28c-8306143d4b3c\") " pod="tigera-operator/tigera-operator-77f994b5bb-jrqp5" Sep 4 17:19:14.850046 containerd[1713]: time="2024-09-04T17:19:14.850000733Z" level=info msg="CreateContainer within sandbox \"100c0e4593c09e955bd75281648407ced2eb07f780fb6e93025b506fdbea945e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"edb4728292422aea5e75ef21b987de3bb15f34fe0f23158b1b61e08d783f8525\"" Sep 4 17:19:14.851481 containerd[1713]: time="2024-09-04T17:19:14.851187695Z" level=info msg="StartContainer for \"edb4728292422aea5e75ef21b987de3bb15f34fe0f23158b1b61e08d783f8525\"" Sep 4 17:19:14.878219 systemd[1]: Started cri-containerd-edb4728292422aea5e75ef21b987de3bb15f34fe0f23158b1b61e08d783f8525.scope - libcontainer container edb4728292422aea5e75ef21b987de3bb15f34fe0f23158b1b61e08d783f8525. Sep 4 17:19:14.909773 containerd[1713]: time="2024-09-04T17:19:14.909693078Z" level=info msg="StartContainer for \"edb4728292422aea5e75ef21b987de3bb15f34fe0f23158b1b61e08d783f8525\" returns successfully" Sep 4 17:19:15.002769 containerd[1713]: time="2024-09-04T17:19:15.002643340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-jrqp5,Uid:424b9642-6504-4feb-a28c-8306143d4b3c,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:19:15.044912 containerd[1713]: time="2024-09-04T17:19:15.044385872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:15.044912 containerd[1713]: time="2024-09-04T17:19:15.044756715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:15.044912 containerd[1713]: time="2024-09-04T17:19:15.044822716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:15.045137 containerd[1713]: time="2024-09-04T17:19:15.044848916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:15.063303 systemd[1]: Started cri-containerd-38621da80565c0eaa85e9a044f5086218ef09d376f3bcd45ff89a2b60e5536f0.scope - libcontainer container 38621da80565c0eaa85e9a044f5086218ef09d376f3bcd45ff89a2b60e5536f0. Sep 4 17:19:15.099155 containerd[1713]: time="2024-09-04T17:19:15.099110949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-jrqp5,Uid:424b9642-6504-4feb-a28c-8306143d4b3c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"38621da80565c0eaa85e9a044f5086218ef09d376f3bcd45ff89a2b60e5536f0\"" Sep 4 17:19:15.100692 containerd[1713]: time="2024-09-04T17:19:15.100594441Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:19:15.614878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3309872675.mount: Deactivated successfully. Sep 4 17:19:15.651982 kubelet[3185]: I0904 17:19:15.651931 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rpcdf" podStartSLOduration=1.651915556 podStartE2EDuration="1.651915556s" podCreationTimestamp="2024-09-04 17:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:15.651492313 +0000 UTC m=+18.172685872" watchObservedRunningTime="2024-09-04 17:19:15.651915556 +0000 UTC m=+18.173109115" Sep 4 17:19:16.784410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405970225.mount: Deactivated successfully. Sep 4 17:19:17.179516 containerd[1713]: time="2024-09-04T17:19:17.179467575Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:17.184154 containerd[1713]: time="2024-09-04T17:19:17.184119932Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485935" Sep 4 17:19:17.191011 containerd[1713]: time="2024-09-04T17:19:17.190974387Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:17.202420 containerd[1713]: time="2024-09-04T17:19:17.202371158Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:17.203297 containerd[1713]: time="2024-09-04T17:19:17.202796041Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.10216104s" Sep 4 17:19:17.203297 containerd[1713]: time="2024-09-04T17:19:17.202838882Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:19:17.205220 containerd[1713]: time="2024-09-04T17:19:17.205144540Z" level=info msg="CreateContainer within sandbox \"38621da80565c0eaa85e9a044f5086218ef09d376f3bcd45ff89a2b60e5536f0\" 
for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:19:17.255192 containerd[1713]: time="2024-09-04T17:19:17.255156739Z" level=info msg="CreateContainer within sandbox \"38621da80565c0eaa85e9a044f5086218ef09d376f3bcd45ff89a2b60e5536f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"79e3d3716ecd0d098bc4b00a2cd6269ebac4d0747081923d19eb2cce37394e89\"" Sep 4 17:19:17.256037 containerd[1713]: time="2024-09-04T17:19:17.255966545Z" level=info msg="StartContainer for \"79e3d3716ecd0d098bc4b00a2cd6269ebac4d0747081923d19eb2cce37394e89\"" Sep 4 17:19:17.278223 systemd[1]: Started cri-containerd-79e3d3716ecd0d098bc4b00a2cd6269ebac4d0747081923d19eb2cce37394e89.scope - libcontainer container 79e3d3716ecd0d098bc4b00a2cd6269ebac4d0747081923d19eb2cce37394e89. Sep 4 17:19:17.305057 containerd[1713]: time="2024-09-04T17:19:17.305009776Z" level=info msg="StartContainer for \"79e3d3716ecd0d098bc4b00a2cd6269ebac4d0747081923d19eb2cce37394e89\" returns successfully" Sep 4 17:19:17.655765 kubelet[3185]: I0904 17:19:17.655692 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-jrqp5" podStartSLOduration=1.5522364020000001 podStartE2EDuration="3.655673732s" podCreationTimestamp="2024-09-04 17:19:14 +0000 UTC" firstStartedPulling="2024-09-04 17:19:15.100239558 +0000 UTC m=+17.621433077" lastFinishedPulling="2024-09-04 17:19:17.203676848 +0000 UTC m=+19.724870407" observedRunningTime="2024-09-04 17:19:17.655540251 +0000 UTC m=+20.176733810" watchObservedRunningTime="2024-09-04 17:19:17.655673732 +0000 UTC m=+20.176867291" Sep 4 17:19:21.318107 kubelet[3185]: I0904 17:19:21.316114 3185 topology_manager.go:215] "Topology Admit Handler" podUID="b95c6ad6-d294-4222-8db5-9f8f4ba757b7" podNamespace="calico-system" podName="calico-typha-747c6648cd-5lsz8" Sep 4 17:19:21.324686 systemd[1]: Created slice kubepods-besteffort-podb95c6ad6_d294_4222_8db5_9f8f4ba757b7.slice - libcontainer container kubepods-besteffort-podb95c6ad6_d294_4222_8db5_9f8f4ba757b7.slice. Sep 4 17:19:21.433033 kubelet[3185]: I0904 17:19:21.432986 3185 topology_manager.go:215] "Topology Admit Handler" podUID="47dc644a-5d67-41eb-b91a-938a22110240" podNamespace="calico-system" podName="calico-node-mhfhl" Sep 4 17:19:21.441055 systemd[1]: Created slice kubepods-besteffort-pod47dc644a_5d67_41eb_b91a_938a22110240.slice - libcontainer container kubepods-besteffort-pod47dc644a_5d67_41eb_b91a_938a22110240.slice. 
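From the PullImage entries above (19485935 bytes read for quay.io/tigera/operator:v1.34.3 in 2.10216104s), the effective pull rate works out to roughly 8.8 MiB/s. A trivial check of that arithmetic, using only values taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const pulledBytes = 19485935.0           // "bytes read=19485935" from the pull entry
	pullTime := 2102161040 * time.Nanosecond // "in 2.10216104s"

	rate := pulledBytes / pullTime.Seconds()
	fmt.Printf("effective pull rate: %.1f MiB/s\n", rate/(1<<20)) // ~8.8 MiB/s
}
```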
Sep 4 17:19:21.443124 kubelet[3185]: I0904 17:19:21.443066 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnps4\" (UniqueName: \"kubernetes.io/projected/b95c6ad6-d294-4222-8db5-9f8f4ba757b7-kube-api-access-vnps4\") pod \"calico-typha-747c6648cd-5lsz8\" (UID: \"b95c6ad6-d294-4222-8db5-9f8f4ba757b7\") " pod="calico-system/calico-typha-747c6648cd-5lsz8" Sep 4 17:19:21.443285 kubelet[3185]: I0904 17:19:21.443126 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b95c6ad6-d294-4222-8db5-9f8f4ba757b7-typha-certs\") pod \"calico-typha-747c6648cd-5lsz8\" (UID: \"b95c6ad6-d294-4222-8db5-9f8f4ba757b7\") " pod="calico-system/calico-typha-747c6648cd-5lsz8" Sep 4 17:19:21.443285 kubelet[3185]: I0904 17:19:21.443146 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b95c6ad6-d294-4222-8db5-9f8f4ba757b7-tigera-ca-bundle\") pod \"calico-typha-747c6648cd-5lsz8\" (UID: \"b95c6ad6-d294-4222-8db5-9f8f4ba757b7\") " pod="calico-system/calico-typha-747c6648cd-5lsz8" Sep 4 17:19:21.538266 kubelet[3185]: I0904 17:19:21.538020 3185 topology_manager.go:215] "Topology Admit Handler" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" podNamespace="calico-system" podName="csi-node-driver-v67mp" Sep 4 17:19:21.539089 kubelet[3185]: E0904 17:19:21.538803 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v67mp" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" Sep 4 17:19:21.543479 kubelet[3185]: I0904 17:19:21.543333 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47dc644a-5d67-41eb-b91a-938a22110240-tigera-ca-bundle\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.543479 kubelet[3185]: I0904 17:19:21.543367 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-var-lib-calico\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.543479 kubelet[3185]: I0904 17:19:21.543387 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-cni-bin-dir\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544099 kubelet[3185]: I0904 17:19:21.543650 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-cni-log-dir\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544099 kubelet[3185]: I0904 17:19:21.543679 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" 
(UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-cni-net-dir\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544099 kubelet[3185]: I0904 17:19:21.543698 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/47dc644a-5d67-41eb-b91a-938a22110240-node-certs\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544099 kubelet[3185]: I0904 17:19:21.543712 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-var-run-calico\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544099 kubelet[3185]: I0904 17:19:21.543729 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-xtables-lock\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544246 kubelet[3185]: I0904 17:19:21.543748 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-policysync\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544246 kubelet[3185]: I0904 17:19:21.543765 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckcl\" (UniqueName: \"kubernetes.io/projected/47dc644a-5d67-41eb-b91a-938a22110240-kube-api-access-qckcl\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544246 kubelet[3185]: I0904 17:19:21.543792 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-lib-modules\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.544246 kubelet[3185]: I0904 17:19:21.543807 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/47dc644a-5d67-41eb-b91a-938a22110240-flexvol-driver-host\") pod \"calico-node-mhfhl\" (UID: \"47dc644a-5d67-41eb-b91a-938a22110240\") " pod="calico-system/calico-node-mhfhl" Sep 4 17:19:21.629231 containerd[1713]: time="2024-09-04T17:19:21.629158018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747c6648cd-5lsz8,Uid:b95c6ad6-d294-4222-8db5-9f8f4ba757b7,Namespace:calico-system,Attempt:0,}" Sep 4 17:19:21.644446 kubelet[3185]: I0904 17:19:21.644397 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rrc\" (UniqueName: \"kubernetes.io/projected/088eb567-a576-4f83-a123-2327acb5e8ca-kube-api-access-28rrc\") pod \"csi-node-driver-v67mp\" (UID: \"088eb567-a576-4f83-a123-2327acb5e8ca\") " pod="calico-system/csi-node-driver-v67mp" 
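The reconciler entries above list the calico-node volumes by name only. For orientation, here is a sketch of how a few of them would appear as hostPath volumes in the pod spec; the volume names come from the log, while the host paths are the usual ones from the upstream Calico manifest and are an assumption here, not something the log records:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hostPathVolume builds a pod-spec volume backed by a directory or file on the node.
func hostPathVolume(name, path string) corev1.Volume {
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: path},
		},
	}
}

func main() {
	// Names from the VerifyControllerAttachedVolume entries; paths assumed.
	volumes := []corev1.Volume{
		hostPathVolume("var-run-calico", "/var/run/calico"),
		hostPathVolume("var-lib-calico", "/var/lib/calico"),
		hostPathVolume("cni-bin-dir", "/opt/cni/bin"),
		hostPathVolume("cni-net-dir", "/etc/cni/net.d"),
		hostPathVolume("cni-log-dir", "/var/log/calico/cni"),
		hostPathVolume("xtables-lock", "/run/xtables.lock"),
	}
	for _, v := range volumes {
		fmt.Printf("%-16s -> %s\n", v.Name, v.VolumeSource.HostPath.Path)
	}
}
```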
Sep 4 17:19:21.644955 kubelet[3185]: I0904 17:19:21.644610 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/088eb567-a576-4f83-a123-2327acb5e8ca-socket-dir\") pod \"csi-node-driver-v67mp\" (UID: \"088eb567-a576-4f83-a123-2327acb5e8ca\") " pod="calico-system/csi-node-driver-v67mp" Sep 4 17:19:21.644955 kubelet[3185]: I0904 17:19:21.644636 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/088eb567-a576-4f83-a123-2327acb5e8ca-registration-dir\") pod \"csi-node-driver-v67mp\" (UID: \"088eb567-a576-4f83-a123-2327acb5e8ca\") " pod="calico-system/csi-node-driver-v67mp" Sep 4 17:19:21.645282 kubelet[3185]: I0904 17:19:21.645134 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/088eb567-a576-4f83-a123-2327acb5e8ca-varrun\") pod \"csi-node-driver-v67mp\" (UID: \"088eb567-a576-4f83-a123-2327acb5e8ca\") " pod="calico-system/csi-node-driver-v67mp" Sep 4 17:19:21.645282 kubelet[3185]: I0904 17:19:21.645249 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/088eb567-a576-4f83-a123-2327acb5e8ca-kubelet-dir\") pod \"csi-node-driver-v67mp\" (UID: \"088eb567-a576-4f83-a123-2327acb5e8ca\") " pod="calico-system/csi-node-driver-v67mp" Sep 4 17:19:21.653225 kubelet[3185]: E0904 17:19:21.648575 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.653225 kubelet[3185]: W0904 17:19:21.648604 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.653225 kubelet[3185]: E0904 17:19:21.648635 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.653835 kubelet[3185]: E0904 17:19:21.653821 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.654001 kubelet[3185]: W0904 17:19:21.653867 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.654711 kubelet[3185]: E0904 17:19:21.654037 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.655926 kubelet[3185]: E0904 17:19:21.655800 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.655926 kubelet[3185]: W0904 17:19:21.655817 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.656333 kubelet[3185]: E0904 17:19:21.656281 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.656333 kubelet[3185]: W0904 17:19:21.656294 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.656738 kubelet[3185]: E0904 17:19:21.656643 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.656738 kubelet[3185]: W0904 17:19:21.656654 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.656738 kubelet[3185]: E0904 17:19:21.656698 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.656738 kubelet[3185]: E0904 17:19:21.656711 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.669576 kubelet[3185]: E0904 17:19:21.656755 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.669576 kubelet[3185]: E0904 17:19:21.657276 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.669576 kubelet[3185]: W0904 17:19:21.657291 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.669576 kubelet[3185]: E0904 17:19:21.657392 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.669576 kubelet[3185]: E0904 17:19:21.657996 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.669576 kubelet[3185]: W0904 17:19:21.658009 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.669576 kubelet[3185]: E0904 17:19:21.658032 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.669576 kubelet[3185]: E0904 17:19:21.658225 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.669576 kubelet[3185]: W0904 17:19:21.658234 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.669576 kubelet[3185]: E0904 17:19:21.658249 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.670153 kubelet[3185]: E0904 17:19:21.658483 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.670153 kubelet[3185]: W0904 17:19:21.658493 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.670153 kubelet[3185]: E0904 17:19:21.658512 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.670153 kubelet[3185]: E0904 17:19:21.658717 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.670153 kubelet[3185]: W0904 17:19:21.658727 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.670153 kubelet[3185]: E0904 17:19:21.658740 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.670153 kubelet[3185]: E0904 17:19:21.658896 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.670153 kubelet[3185]: W0904 17:19:21.658903 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.670153 kubelet[3185]: E0904 17:19:21.658914 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.670153 kubelet[3185]: E0904 17:19:21.659115 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.670507 kubelet[3185]: W0904 17:19:21.659123 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.670507 kubelet[3185]: E0904 17:19:21.659134 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.670507 kubelet[3185]: E0904 17:19:21.659303 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.670507 kubelet[3185]: W0904 17:19:21.659311 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.670507 kubelet[3185]: E0904 17:19:21.659320 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.670507 kubelet[3185]: E0904 17:19:21.659534 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.670507 kubelet[3185]: W0904 17:19:21.659542 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.670507 kubelet[3185]: E0904 17:19:21.659550 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.675650 kubelet[3185]: E0904 17:19:21.675498 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.675650 kubelet[3185]: W0904 17:19:21.675514 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.675650 kubelet[3185]: E0904 17:19:21.675531 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.703785 containerd[1713]: time="2024-09-04T17:19:21.703303024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:21.703785 containerd[1713]: time="2024-09-04T17:19:21.703360824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:21.703785 containerd[1713]: time="2024-09-04T17:19:21.703378584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:21.703785 containerd[1713]: time="2024-09-04T17:19:21.703391624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:21.720231 systemd[1]: Started cri-containerd-41d9de9cd6078097ed883ce47f8014cec428369cbd35f8d628fb73841031d426.scope - libcontainer container 41d9de9cd6078097ed883ce47f8014cec428369cbd35f8d628fb73841031d426. 
Sep 4 17:19:21.747214 kubelet[3185]: E0904 17:19:21.746272 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.747214 kubelet[3185]: W0904 17:19:21.746295 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.747214 kubelet[3185]: E0904 17:19:21.746523 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.747362 containerd[1713]: time="2024-09-04T17:19:21.746372921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mhfhl,Uid:47dc644a-5d67-41eb-b91a-938a22110240,Namespace:calico-system,Attempt:0,}" Sep 4 17:19:21.750055 kubelet[3185]: E0904 17:19:21.750025 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.750055 kubelet[3185]: W0904 17:19:21.750046 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.750513 kubelet[3185]: E0904 17:19:21.750069 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.750570 kubelet[3185]: E0904 17:19:21.750555 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.750570 kubelet[3185]: W0904 17:19:21.750564 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.750627 kubelet[3185]: E0904 17:19:21.750583 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.750935 kubelet[3185]: E0904 17:19:21.750906 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.750935 kubelet[3185]: W0904 17:19:21.750925 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.750935 kubelet[3185]: E0904 17:19:21.750943 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.751231 kubelet[3185]: E0904 17:19:21.751207 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.751231 kubelet[3185]: W0904 17:19:21.751223 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.751231 kubelet[3185]: E0904 17:19:21.751239 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.753027 kubelet[3185]: E0904 17:19:21.751706 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753027 kubelet[3185]: W0904 17:19:21.751728 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753027 kubelet[3185]: E0904 17:19:21.751746 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.753027 kubelet[3185]: E0904 17:19:21.751953 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753027 kubelet[3185]: W0904 17:19:21.751962 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753027 kubelet[3185]: E0904 17:19:21.751973 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.753027 kubelet[3185]: E0904 17:19:21.752194 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753027 kubelet[3185]: W0904 17:19:21.752203 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753027 kubelet[3185]: E0904 17:19:21.752212 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.753027 kubelet[3185]: E0904 17:19:21.752470 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753293 kubelet[3185]: W0904 17:19:21.752479 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753293 kubelet[3185]: E0904 17:19:21.752513 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.753293 kubelet[3185]: E0904 17:19:21.752722 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753293 kubelet[3185]: W0904 17:19:21.752730 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753293 kubelet[3185]: E0904 17:19:21.752741 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.753293 kubelet[3185]: E0904 17:19:21.752894 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753293 kubelet[3185]: W0904 17:19:21.752901 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753293 kubelet[3185]: E0904 17:19:21.752925 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.753293 kubelet[3185]: E0904 17:19:21.753052 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753293 kubelet[3185]: W0904 17:19:21.753059 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753879 kubelet[3185]: E0904 17:19:21.753098 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.753879 kubelet[3185]: E0904 17:19:21.753362 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.753879 kubelet[3185]: W0904 17:19:21.753371 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.753879 kubelet[3185]: E0904 17:19:21.753380 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.755345 kubelet[3185]: E0904 17:19:21.755279 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.755345 kubelet[3185]: W0904 17:19:21.755295 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.755659 kubelet[3185]: E0904 17:19:21.755628 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.756122 kubelet[3185]: E0904 17:19:21.755958 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.756122 kubelet[3185]: W0904 17:19:21.755975 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.756568 kubelet[3185]: E0904 17:19:21.756134 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.756811 kubelet[3185]: E0904 17:19:21.756782 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.756811 kubelet[3185]: W0904 17:19:21.756797 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.757298 kubelet[3185]: E0904 17:19:21.757002 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.757298 kubelet[3185]: E0904 17:19:21.757242 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.757298 kubelet[3185]: W0904 17:19:21.757254 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.758845 kubelet[3185]: E0904 17:19:21.757374 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.758845 kubelet[3185]: E0904 17:19:21.757501 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.758845 kubelet[3185]: W0904 17:19:21.757510 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.758845 kubelet[3185]: E0904 17:19:21.757536 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.758845 kubelet[3185]: E0904 17:19:21.757750 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.758845 kubelet[3185]: W0904 17:19:21.757763 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.758845 kubelet[3185]: E0904 17:19:21.757782 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.759208 kubelet[3185]: E0904 17:19:21.759191 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.759402 kubelet[3185]: W0904 17:19:21.759273 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.759402 kubelet[3185]: E0904 17:19:21.759305 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.760233 kubelet[3185]: E0904 17:19:21.760136 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.760233 kubelet[3185]: W0904 17:19:21.760149 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.760233 kubelet[3185]: E0904 17:19:21.760185 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.761437 kubelet[3185]: E0904 17:19:21.761414 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.761977 kubelet[3185]: W0904 17:19:21.761522 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.761977 kubelet[3185]: E0904 17:19:21.761550 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.763149 kubelet[3185]: E0904 17:19:21.763133 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.763336 kubelet[3185]: W0904 17:19:21.763231 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.763336 kubelet[3185]: E0904 17:19:21.763268 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.763532 kubelet[3185]: E0904 17:19:21.763517 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.763613 kubelet[3185]: W0904 17:19:21.763600 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.763683 kubelet[3185]: E0904 17:19:21.763672 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:21.763985 containerd[1713]: time="2024-09-04T17:19:21.763789720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-747c6648cd-5lsz8,Uid:b95c6ad6-d294-4222-8db5-9f8f4ba757b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"41d9de9cd6078097ed883ce47f8014cec428369cbd35f8d628fb73841031d426\"" Sep 4 17:19:21.766098 kubelet[3185]: E0904 17:19:21.764550 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.766098 kubelet[3185]: W0904 17:19:21.764567 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.766098 kubelet[3185]: E0904 17:19:21.764580 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.766392 containerd[1713]: time="2024-09-04T17:19:21.766269526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:19:21.776515 kubelet[3185]: E0904 17:19:21.776497 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:21.776645 kubelet[3185]: W0904 17:19:21.776598 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:21.776645 kubelet[3185]: E0904 17:19:21.776618 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:21.811932 containerd[1713]: time="2024-09-04T17:19:21.811791988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:21.812235 containerd[1713]: time="2024-09-04T17:19:21.812062148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:21.812402 containerd[1713]: time="2024-09-04T17:19:21.812223389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:21.812504 containerd[1713]: time="2024-09-04T17:19:21.812365469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:21.831314 systemd[1]: Started cri-containerd-da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f.scope - libcontainer container da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f. 
Sep 4 17:19:21.858261 containerd[1713]: time="2024-09-04T17:19:21.858207732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mhfhl,Uid:47dc644a-5d67-41eb-b91a-938a22110240,Namespace:calico-system,Attempt:0,} returns sandbox id \"da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f\"" Sep 4 17:19:23.581198 kubelet[3185]: E0904 17:19:23.580340 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v67mp" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" Sep 4 17:19:23.631538 containerd[1713]: time="2024-09-04T17:19:23.631485887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:23.634150 containerd[1713]: time="2024-09-04T17:19:23.634109063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:19:23.639089 containerd[1713]: time="2024-09-04T17:19:23.639026055Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:23.646411 containerd[1713]: time="2024-09-04T17:19:23.646373182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:23.647179 containerd[1713]: time="2024-09-04T17:19:23.647043626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 1.8806485s" Sep 4 17:19:23.647581 containerd[1713]: time="2024-09-04T17:19:23.647085506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:19:23.650181 containerd[1713]: time="2024-09-04T17:19:23.649882724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:19:23.661915 containerd[1713]: time="2024-09-04T17:19:23.661812280Z" level=info msg="CreateContainer within sandbox \"41d9de9cd6078097ed883ce47f8014cec428369cbd35f8d628fb73841031d426\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:19:23.715992 containerd[1713]: time="2024-09-04T17:19:23.715945345Z" level=info msg="CreateContainer within sandbox \"41d9de9cd6078097ed883ce47f8014cec428369cbd35f8d628fb73841031d426\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"90f753fdfb836d3688736700a1b5072cef4c2da416c035fd790ef45e68189e3a\"" Sep 4 17:19:23.717098 containerd[1713]: time="2024-09-04T17:19:23.716934311Z" level=info msg="StartContainer for \"90f753fdfb836d3688736700a1b5072cef4c2da416c035fd790ef45e68189e3a\"" Sep 4 17:19:23.751440 systemd[1]: Started cri-containerd-90f753fdfb836d3688736700a1b5072cef4c2da416c035fd790ef45e68189e3a.scope - libcontainer container 90f753fdfb836d3688736700a1b5072cef4c2da416c035fd790ef45e68189e3a. 
Sep 4 17:19:23.810418 containerd[1713]: time="2024-09-04T17:19:23.810364627Z" level=info msg="StartContainer for \"90f753fdfb836d3688736700a1b5072cef4c2da416c035fd790ef45e68189e3a\" returns successfully" Sep 4 17:19:24.723533 kubelet[3185]: E0904 17:19:24.723497 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.723533 kubelet[3185]: W0904 17:19:24.723525 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.723929 kubelet[3185]: E0904 17:19:24.723600 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.723929 kubelet[3185]: E0904 17:19:24.723822 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.723929 kubelet[3185]: W0904 17:19:24.723843 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.723929 kubelet[3185]: E0904 17:19:24.723855 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.724460 kubelet[3185]: E0904 17:19:24.724109 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.724460 kubelet[3185]: W0904 17:19:24.724125 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.724460 kubelet[3185]: E0904 17:19:24.724167 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.724767 kubelet[3185]: E0904 17:19:24.724504 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.724767 kubelet[3185]: W0904 17:19:24.724516 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.724767 kubelet[3185]: E0904 17:19:24.724637 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.724976 kubelet[3185]: E0904 17:19:24.724954 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.724976 kubelet[3185]: W0904 17:19:24.724970 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.725039 kubelet[3185]: E0904 17:19:24.724981 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:24.725327 kubelet[3185]: E0904 17:19:24.725219 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.725327 kubelet[3185]: W0904 17:19:24.725233 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.725327 kubelet[3185]: E0904 17:19:24.725261 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.726264 kubelet[3185]: E0904 17:19:24.725633 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.726264 kubelet[3185]: W0904 17:19:24.725649 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.726264 kubelet[3185]: E0904 17:19:24.725739 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.726264 kubelet[3185]: E0904 17:19:24.725926 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.726264 kubelet[3185]: W0904 17:19:24.725935 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.726264 kubelet[3185]: E0904 17:19:24.725964 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.726264 kubelet[3185]: E0904 17:19:24.726144 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.726264 kubelet[3185]: W0904 17:19:24.726152 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.726264 kubelet[3185]: E0904 17:19:24.726180 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.727237 kubelet[3185]: E0904 17:19:24.727058 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.727237 kubelet[3185]: W0904 17:19:24.727169 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.727237 kubelet[3185]: E0904 17:19:24.727184 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:24.727898 kubelet[3185]: E0904 17:19:24.727626 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.727898 kubelet[3185]: W0904 17:19:24.727639 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.727898 kubelet[3185]: E0904 17:19:24.727651 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.728444 kubelet[3185]: E0904 17:19:24.728367 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.728444 kubelet[3185]: W0904 17:19:24.728381 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.728444 kubelet[3185]: E0904 17:19:24.728393 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.730320 kubelet[3185]: E0904 17:19:24.729316 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.730320 kubelet[3185]: W0904 17:19:24.729337 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.730320 kubelet[3185]: E0904 17:19:24.729399 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.730793 kubelet[3185]: E0904 17:19:24.730560 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.730793 kubelet[3185]: W0904 17:19:24.730597 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.731382 kubelet[3185]: E0904 17:19:24.731117 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.732034 kubelet[3185]: E0904 17:19:24.731947 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.732034 kubelet[3185]: W0904 17:19:24.731960 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.732034 kubelet[3185]: E0904 17:19:24.731972 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:24.771940 kubelet[3185]: E0904 17:19:24.771909 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.772556 kubelet[3185]: W0904 17:19:24.772167 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.772556 kubelet[3185]: E0904 17:19:24.772197 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.772773 kubelet[3185]: E0904 17:19:24.772759 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.772945 kubelet[3185]: W0904 17:19:24.772859 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.772945 kubelet[3185]: E0904 17:19:24.772886 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.773773 kubelet[3185]: E0904 17:19:24.773597 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.773773 kubelet[3185]: W0904 17:19:24.773612 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.773773 kubelet[3185]: E0904 17:19:24.773632 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.774424 kubelet[3185]: E0904 17:19:24.774149 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.774424 kubelet[3185]: W0904 17:19:24.774162 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.774515 kubelet[3185]: E0904 17:19:24.774480 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.774563 kubelet[3185]: E0904 17:19:24.774543 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.774563 kubelet[3185]: W0904 17:19:24.774559 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.775119 kubelet[3185]: E0904 17:19:24.774688 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:24.775563 kubelet[3185]: E0904 17:19:24.775375 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.775563 kubelet[3185]: W0904 17:19:24.775394 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.775739 kubelet[3185]: E0904 17:19:24.775721 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.775917 kubelet[3185]: E0904 17:19:24.775890 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.775917 kubelet[3185]: W0904 17:19:24.775915 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.776107 kubelet[3185]: E0904 17:19:24.776012 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.776251 kubelet[3185]: E0904 17:19:24.776230 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.776251 kubelet[3185]: W0904 17:19:24.776248 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.776439 kubelet[3185]: E0904 17:19:24.776280 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.776439 kubelet[3185]: E0904 17:19:24.776423 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.776439 kubelet[3185]: W0904 17:19:24.776433 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.776524 kubelet[3185]: E0904 17:19:24.776451 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.776686 kubelet[3185]: E0904 17:19:24.776627 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.776686 kubelet[3185]: W0904 17:19:24.776641 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.776686 kubelet[3185]: E0904 17:19:24.776651 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:24.777038 kubelet[3185]: E0904 17:19:24.777018 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.777038 kubelet[3185]: W0904 17:19:24.777036 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.777161 kubelet[3185]: E0904 17:19:24.777141 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.777590 kubelet[3185]: E0904 17:19:24.777564 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.777590 kubelet[3185]: W0904 17:19:24.777587 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.777658 kubelet[3185]: E0904 17:19:24.777601 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.778443 kubelet[3185]: E0904 17:19:24.778417 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.778443 kubelet[3185]: W0904 17:19:24.778438 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.778537 kubelet[3185]: E0904 17:19:24.778451 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.779705 kubelet[3185]: E0904 17:19:24.779063 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.779705 kubelet[3185]: W0904 17:19:24.779185 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.779705 kubelet[3185]: E0904 17:19:24.779205 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.779966 kubelet[3185]: E0904 17:19:24.779895 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.779966 kubelet[3185]: W0904 17:19:24.779921 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.779966 kubelet[3185]: E0904 17:19:24.779944 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:24.780185 kubelet[3185]: E0904 17:19:24.780129 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.780185 kubelet[3185]: W0904 17:19:24.780142 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.780185 kubelet[3185]: E0904 17:19:24.780154 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.781388 kubelet[3185]: E0904 17:19:24.781331 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.781388 kubelet[3185]: W0904 17:19:24.781342 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.781388 kubelet[3185]: E0904 17:19:24.781354 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:19:24.782475 kubelet[3185]: E0904 17:19:24.782399 3185 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:19:24.782475 kubelet[3185]: W0904 17:19:24.782416 3185 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:19:24.782475 kubelet[3185]: E0904 17:19:24.782428 3185 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:19:24.856966 containerd[1713]: time="2024-09-04T17:19:24.856910936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:24.862682 containerd[1713]: time="2024-09-04T17:19:24.862524812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:19:24.868352 containerd[1713]: time="2024-09-04T17:19:24.868288129Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:24.874591 containerd[1713]: time="2024-09-04T17:19:24.874542169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:24.875260 containerd[1713]: time="2024-09-04T17:19:24.875094052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.225126048s" Sep 4 17:19:24.875260 containerd[1713]: time="2024-09-04T17:19:24.875129492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:19:24.877836 containerd[1713]: time="2024-09-04T17:19:24.877696069Z" level=info msg="CreateContainer within sandbox \"da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:19:24.943675 containerd[1713]: time="2024-09-04T17:19:24.943600569Z" level=info msg="CreateContainer within sandbox \"da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81\"" Sep 4 17:19:24.945348 containerd[1713]: time="2024-09-04T17:19:24.945309300Z" level=info msg="StartContainer for \"1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81\"" Sep 4 17:19:24.980300 systemd[1]: Started cri-containerd-1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81.scope - libcontainer container 1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81. Sep 4 17:19:25.016353 containerd[1713]: time="2024-09-04T17:19:25.016199831Z" level=info msg="StartContainer for \"1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81\" returns successfully" Sep 4 17:19:25.024115 systemd[1]: cri-containerd-1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81.scope: Deactivated successfully. Sep 4 17:19:25.043968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81-rootfs.mount: Deactivated successfully. 
Sep 4 17:19:25.581220 kubelet[3185]: E0904 17:19:25.579909 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v67mp" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" Sep 4 17:19:25.681857 kubelet[3185]: I0904 17:19:25.681795 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-747c6648cd-5lsz8" podStartSLOduration=2.798077278 podStartE2EDuration="4.681774433s" podCreationTimestamp="2024-09-04 17:19:21 +0000 UTC" firstStartedPulling="2024-09-04 17:19:21.765334963 +0000 UTC m=+24.286528522" lastFinishedPulling="2024-09-04 17:19:23.649032118 +0000 UTC m=+26.170225677" observedRunningTime="2024-09-04 17:19:24.674627535 +0000 UTC m=+27.195821094" watchObservedRunningTime="2024-09-04 17:19:25.681774433 +0000 UTC m=+28.202967992" Sep 4 17:19:26.231883 containerd[1713]: time="2024-09-04T17:19:26.231806058Z" level=info msg="shim disconnected" id=1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81 namespace=k8s.io Sep 4 17:19:26.232424 containerd[1713]: time="2024-09-04T17:19:26.231862619Z" level=warning msg="cleaning up after shim disconnected" id=1069449e662b2c77231649ddc8311683a3e3b16421ad1902ef180b9497f20a81 namespace=k8s.io Sep 4 17:19:26.232424 containerd[1713]: time="2024-09-04T17:19:26.232278421Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:26.678792 containerd[1713]: time="2024-09-04T17:19:26.675414405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:19:27.581249 kubelet[3185]: E0904 17:19:27.581198 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v67mp" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" Sep 4 17:19:29.581406 kubelet[3185]: E0904 17:19:29.581305 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v67mp" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" Sep 4 17:19:29.846606 containerd[1713]: time="2024-09-04T17:19:29.846474862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:29.849725 containerd[1713]: time="2024-09-04T17:19:29.849691432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:19:29.855457 containerd[1713]: time="2024-09-04T17:19:29.855408650Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:29.860178 containerd[1713]: time="2024-09-04T17:19:29.860117985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:29.860856 containerd[1713]: time="2024-09-04T17:19:29.860747107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id 
\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.185285021s" Sep 4 17:19:29.860856 containerd[1713]: time="2024-09-04T17:19:29.860780507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:19:29.863468 containerd[1713]: time="2024-09-04T17:19:29.863311275Z" level=info msg="CreateContainer within sandbox \"da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:19:29.918258 containerd[1713]: time="2024-09-04T17:19:29.918217684Z" level=info msg="CreateContainer within sandbox \"da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91\"" Sep 4 17:19:29.919092 containerd[1713]: time="2024-09-04T17:19:29.919022767Z" level=info msg="StartContainer for \"b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91\"" Sep 4 17:19:29.947262 systemd[1]: Started cri-containerd-b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91.scope - libcontainer container b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91. Sep 4 17:19:29.972990 containerd[1713]: time="2024-09-04T17:19:29.972932013Z" level=info msg="StartContainer for \"b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91\" returns successfully" Sep 4 17:19:31.349572 containerd[1713]: time="2024-09-04T17:19:31.349521260Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:19:31.351836 systemd[1]: cri-containerd-b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91.scope: Deactivated successfully. Sep 4 17:19:31.370339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91-rootfs.mount: Deactivated successfully. 
Sep 4 17:19:31.386394 kubelet[3185]: I0904 17:19:31.386361 3185 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:19:31.874318 kubelet[3185]: I0904 17:19:31.403411 3185 topology_manager.go:215] "Topology Admit Handler" podUID="b2ba46b2-596d-4436-98b2-e74bc079ecc8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-h9qfk" Sep 4 17:19:31.874318 kubelet[3185]: I0904 17:19:31.410350 3185 topology_manager.go:215] "Topology Admit Handler" podUID="671faa99-9531-400b-8b3d-16fab71771a9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-hmfdp" Sep 4 17:19:31.874318 kubelet[3185]: I0904 17:19:31.413969 3185 topology_manager.go:215] "Topology Admit Handler" podUID="48717116-5ff2-47b0-bbf9-6d04482c261b" podNamespace="calico-system" podName="calico-kube-controllers-5774f58464-ztmr4" Sep 4 17:19:31.874318 kubelet[3185]: I0904 17:19:31.518275 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/671faa99-9531-400b-8b3d-16fab71771a9-config-volume\") pod \"coredns-7db6d8ff4d-hmfdp\" (UID: \"671faa99-9531-400b-8b3d-16fab71771a9\") " pod="kube-system/coredns-7db6d8ff4d-hmfdp" Sep 4 17:19:31.874318 kubelet[3185]: I0904 17:19:31.518313 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prgs5\" (UniqueName: \"kubernetes.io/projected/671faa99-9531-400b-8b3d-16fab71771a9-kube-api-access-prgs5\") pod \"coredns-7db6d8ff4d-hmfdp\" (UID: \"671faa99-9531-400b-8b3d-16fab71771a9\") " pod="kube-system/coredns-7db6d8ff4d-hmfdp" Sep 4 17:19:31.874318 kubelet[3185]: I0904 17:19:31.518335 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2ba46b2-596d-4436-98b2-e74bc079ecc8-config-volume\") pod \"coredns-7db6d8ff4d-h9qfk\" (UID: \"b2ba46b2-596d-4436-98b2-e74bc079ecc8\") " pod="kube-system/coredns-7db6d8ff4d-h9qfk" Sep 4 17:19:31.412616 systemd[1]: Created slice kubepods-burstable-podb2ba46b2_596d_4436_98b2_e74bc079ecc8.slice - libcontainer container kubepods-burstable-podb2ba46b2_596d_4436_98b2_e74bc079ecc8.slice. 
Sep 4 17:19:31.875460 containerd[1713]: time="2024-09-04T17:19:31.874785910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v67mp,Uid:088eb567-a576-4f83-a123-2327acb5e8ca,Namespace:calico-system,Attempt:0,}" Sep 4 17:19:31.875511 kubelet[3185]: I0904 17:19:31.518353 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vfm\" (UniqueName: \"kubernetes.io/projected/48717116-5ff2-47b0-bbf9-6d04482c261b-kube-api-access-88vfm\") pod \"calico-kube-controllers-5774f58464-ztmr4\" (UID: \"48717116-5ff2-47b0-bbf9-6d04482c261b\") " pod="calico-system/calico-kube-controllers-5774f58464-ztmr4" Sep 4 17:19:31.875511 kubelet[3185]: I0904 17:19:31.518374 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj22p\" (UniqueName: \"kubernetes.io/projected/b2ba46b2-596d-4436-98b2-e74bc079ecc8-kube-api-access-fj22p\") pod \"coredns-7db6d8ff4d-h9qfk\" (UID: \"b2ba46b2-596d-4436-98b2-e74bc079ecc8\") " pod="kube-system/coredns-7db6d8ff4d-h9qfk" Sep 4 17:19:31.875511 kubelet[3185]: I0904 17:19:31.518440 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48717116-5ff2-47b0-bbf9-6d04482c261b-tigera-ca-bundle\") pod \"calico-kube-controllers-5774f58464-ztmr4\" (UID: \"48717116-5ff2-47b0-bbf9-6d04482c261b\") " pod="calico-system/calico-kube-controllers-5774f58464-ztmr4" Sep 4 17:19:31.421034 systemd[1]: Created slice kubepods-burstable-pod671faa99_9531_400b_8b3d_16fab71771a9.slice - libcontainer container kubepods-burstable-pod671faa99_9531_400b_8b3d_16fab71771a9.slice. Sep 4 17:19:31.428226 systemd[1]: Created slice kubepods-besteffort-pod48717116_5ff2_47b0_bbf9_6d04482c261b.slice - libcontainer container kubepods-besteffort-pod48717116_5ff2_47b0_bbf9_6d04482c261b.slice. Sep 4 17:19:31.584969 systemd[1]: Created slice kubepods-besteffort-pod088eb567_a576_4f83_a123_2327acb5e8ca.slice - libcontainer container kubepods-besteffort-pod088eb567_a576_4f83_a123_2327acb5e8ca.slice. 
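The kubepods slice names systemd reports above are derived mechanically from each pod's QoS class and UID: under the systemd cgroup driver the unit is kubepods-<qos>-pod<uid>.slice, with the hyphens in the UID replaced by underscores. A small sketch that reproduces the burstable and besteffort names seen in the log; the helper name is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// sliceNameForPod reproduces the systemd slice names in the log:
// "kubepods-<qos>-pod<uid>.slice", with the UID's hyphens turned into
// underscores. Hypothetical helper covering only the QoS classes seen here.
func sliceNameForPod(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the kubelet lines above.
	fmt.Println(sliceNameForPod("burstable", "b2ba46b2-596d-4436-98b2-e74bc079ecc8"))
	fmt.Println(sliceNameForPod("besteffort", "088eb567-a576-4f83-a123-2327acb5e8ca"))
}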
Sep 4 17:19:32.175809 containerd[1713]: time="2024-09-04T17:19:32.175702560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9qfk,Uid:b2ba46b2-596d-4436-98b2-e74bc079ecc8,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:32.180096 containerd[1713]: time="2024-09-04T17:19:32.179974971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5774f58464-ztmr4,Uid:48717116-5ff2-47b0-bbf9-6d04482c261b,Namespace:calico-system,Attempt:0,}" Sep 4 17:19:32.180308 containerd[1713]: time="2024-09-04T17:19:32.180286655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmfdp,Uid:671faa99-9531-400b-8b3d-16fab71771a9,Namespace:kube-system,Attempt:0,}" Sep 4 17:19:32.967559 containerd[1713]: time="2024-09-04T17:19:32.967497232Z" level=info msg="shim disconnected" id=b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91 namespace=k8s.io Sep 4 17:19:32.967870 containerd[1713]: time="2024-09-04T17:19:32.967569873Z" level=warning msg="cleaning up after shim disconnected" id=b937f2eeaf358a84d49a52889ae82dc81cd8a124b4e52d005a86814ecf2eee91 namespace=k8s.io Sep 4 17:19:32.967870 containerd[1713]: time="2024-09-04T17:19:32.967579553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:33.215267 containerd[1713]: time="2024-09-04T17:19:33.215126953Z" level=error msg="Failed to destroy network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.215675 containerd[1713]: time="2024-09-04T17:19:33.215647275Z" level=error msg="encountered an error cleaning up failed sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.215802 containerd[1713]: time="2024-09-04T17:19:33.215779876Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmfdp,Uid:671faa99-9531-400b-8b3d-16fab71771a9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.216422 kubelet[3185]: E0904 17:19:33.216033 3185 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.216422 kubelet[3185]: E0904 17:19:33.216114 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hmfdp" Sep 4 17:19:33.216422 kubelet[3185]: E0904 17:19:33.216136 3185 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-hmfdp" Sep 4 17:19:33.216803 kubelet[3185]: E0904 17:19:33.216174 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-hmfdp_kube-system(671faa99-9531-400b-8b3d-16fab71771a9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-hmfdp_kube-system(671faa99-9531-400b-8b3d-16fab71771a9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hmfdp" podUID="671faa99-9531-400b-8b3d-16fab71771a9" Sep 4 17:19:33.220704 containerd[1713]: time="2024-09-04T17:19:33.219993612Z" level=error msg="Failed to destroy network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.221958 containerd[1713]: time="2024-09-04T17:19:33.221354897Z" level=error msg="encountered an error cleaning up failed sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.221958 containerd[1713]: time="2024-09-04T17:19:33.221437698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v67mp,Uid:088eb567-a576-4f83-a123-2327acb5e8ca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.222102 kubelet[3185]: E0904 17:19:33.221629 3185 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.222102 kubelet[3185]: E0904 17:19:33.221675 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v67mp" Sep 4 17:19:33.222102 kubelet[3185]: E0904 17:19:33.221693 3185 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v67mp" Sep 4 17:19:33.222198 kubelet[3185]: E0904 17:19:33.221730 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v67mp_calico-system(088eb567-a576-4f83-a123-2327acb5e8ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v67mp_calico-system(088eb567-a576-4f83-a123-2327acb5e8ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v67mp" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" Sep 4 17:19:33.226801 containerd[1713]: time="2024-09-04T17:19:33.226688958Z" level=error msg="Failed to destroy network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.227212 containerd[1713]: time="2024-09-04T17:19:33.227092760Z" level=error msg="encountered an error cleaning up failed sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.227212 containerd[1713]: time="2024-09-04T17:19:33.227136080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9qfk,Uid:b2ba46b2-596d-4436-98b2-e74bc079ecc8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.227769 kubelet[3185]: E0904 17:19:33.227479 3185 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.227769 kubelet[3185]: E0904 17:19:33.227524 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h9qfk" Sep 4 17:19:33.227769 kubelet[3185]: E0904 17:19:33.227541 3185 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-h9qfk" Sep 4 17:19:33.227886 kubelet[3185]: E0904 17:19:33.227575 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-h9qfk_kube-system(b2ba46b2-596d-4436-98b2-e74bc079ecc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-h9qfk_kube-system(b2ba46b2-596d-4436-98b2-e74bc079ecc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h9qfk" podUID="b2ba46b2-596d-4436-98b2-e74bc079ecc8" Sep 4 17:19:33.234539 containerd[1713]: time="2024-09-04T17:19:33.234501709Z" level=error msg="Failed to destroy network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.234879 containerd[1713]: time="2024-09-04T17:19:33.234847870Z" level=error msg="encountered an error cleaning up failed sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.234934 containerd[1713]: time="2024-09-04T17:19:33.234909710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5774f58464-ztmr4,Uid:48717116-5ff2-47b0-bbf9-6d04482c261b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.235427 kubelet[3185]: E0904 17:19:33.235120 3185 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.235427 kubelet[3185]: E0904 17:19:33.235165 3185 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5774f58464-ztmr4" Sep 4 17:19:33.235427 kubelet[3185]: E0904 17:19:33.235184 3185 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5774f58464-ztmr4" Sep 4 17:19:33.235558 kubelet[3185]: E0904 17:19:33.235227 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5774f58464-ztmr4_calico-system(48717116-5ff2-47b0-bbf9-6d04482c261b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5774f58464-ztmr4_calico-system(48717116-5ff2-47b0-bbf9-6d04482c261b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5774f58464-ztmr4" podUID="48717116-5ff2-47b0-bbf9-6d04482c261b" Sep 4 17:19:33.688416 containerd[1713]: time="2024-09-04T17:19:33.688372190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:19:33.690707 kubelet[3185]: I0904 17:19:33.689914 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:33.690811 containerd[1713]: time="2024-09-04T17:19:33.690554798Z" level=info msg="StopPodSandbox for \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\"" Sep 4 17:19:33.690846 containerd[1713]: time="2024-09-04T17:19:33.690824639Z" level=info msg="Ensure that sandbox ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5 in task-service has been cleanup successfully" Sep 4 17:19:33.692150 kubelet[3185]: I0904 17:19:33.692126 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:33.693054 containerd[1713]: time="2024-09-04T17:19:33.693028928Z" level=info msg="StopPodSandbox for \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\"" Sep 4 17:19:33.694716 containerd[1713]: time="2024-09-04T17:19:33.694274533Z" level=info msg="Ensure that sandbox d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b in task-service has been cleanup successfully" Sep 4 17:19:33.698562 kubelet[3185]: I0904 17:19:33.698527 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:33.700314 containerd[1713]: time="2024-09-04T17:19:33.700280876Z" level=info msg="StopPodSandbox for \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\"" Sep 4 17:19:33.701943 containerd[1713]: time="2024-09-04T17:19:33.700575997Z" level=info 
msg="Ensure that sandbox 6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2 in task-service has been cleanup successfully" Sep 4 17:19:33.708059 kubelet[3185]: I0904 17:19:33.707718 3185 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:33.711697 containerd[1713]: time="2024-09-04T17:19:33.711642280Z" level=info msg="StopPodSandbox for \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\"" Sep 4 17:19:33.711887 containerd[1713]: time="2024-09-04T17:19:33.711859121Z" level=info msg="Ensure that sandbox fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3 in task-service has been cleanup successfully" Sep 4 17:19:33.745154 containerd[1713]: time="2024-09-04T17:19:33.745112970Z" level=error msg="StopPodSandbox for \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\" failed" error="failed to destroy network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.745560 kubelet[3185]: E0904 17:19:33.745517 3185 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:33.745632 kubelet[3185]: E0904 17:19:33.745576 3185 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5"} Sep 4 17:19:33.745660 kubelet[3185]: E0904 17:19:33.745639 3185 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b2ba46b2-596d-4436-98b2-e74bc079ecc8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:19:33.745713 kubelet[3185]: E0904 17:19:33.745665 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b2ba46b2-596d-4436-98b2-e74bc079ecc8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-h9qfk" podUID="b2ba46b2-596d-4436-98b2-e74bc079ecc8" Sep 4 17:19:33.753407 containerd[1713]: time="2024-09-04T17:19:33.753344602Z" level=error msg="StopPodSandbox for \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\" failed" error="failed to destroy network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.753567 kubelet[3185]: E0904 17:19:33.753526 3185 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:33.753624 kubelet[3185]: E0904 17:19:33.753574 3185 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b"} Sep 4 17:19:33.753624 kubelet[3185]: E0904 17:19:33.753604 3185 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"088eb567-a576-4f83-a123-2327acb5e8ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:19:33.753696 kubelet[3185]: E0904 17:19:33.753623 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"088eb567-a576-4f83-a123-2327acb5e8ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v67mp" podUID="088eb567-a576-4f83-a123-2327acb5e8ca" Sep 4 17:19:33.767183 containerd[1713]: time="2024-09-04T17:19:33.767122055Z" level=error msg="StopPodSandbox for \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\" failed" error="failed to destroy network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.767413 kubelet[3185]: E0904 17:19:33.767372 3185 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:33.767466 kubelet[3185]: E0904 17:19:33.767420 3185 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2"} Sep 4 17:19:33.767491 kubelet[3185]: E0904 17:19:33.767461 3185 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"671faa99-9531-400b-8b3d-16fab71771a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:19:33.767491 kubelet[3185]: E0904 17:19:33.767484 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"671faa99-9531-400b-8b3d-16fab71771a9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-hmfdp" podUID="671faa99-9531-400b-8b3d-16fab71771a9" Sep 4 17:19:33.771965 containerd[1713]: time="2024-09-04T17:19:33.771904634Z" level=error msg="StopPodSandbox for \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\" failed" error="failed to destroy network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:19:33.772130 kubelet[3185]: E0904 17:19:33.772105 3185 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:33.772172 kubelet[3185]: E0904 17:19:33.772142 3185 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3"} Sep 4 17:19:33.772196 kubelet[3185]: E0904 17:19:33.772166 3185 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48717116-5ff2-47b0-bbf9-6d04482c261b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:19:33.772246 kubelet[3185]: E0904 17:19:33.772197 3185 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48717116-5ff2-47b0-bbf9-6d04482c261b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5774f58464-ztmr4" podUID="48717116-5ff2-47b0-bbf9-6d04482c261b" Sep 4 17:19:34.096739 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3-shm.mount: Deactivated successfully. Sep 4 17:19:34.096833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5-shm.mount: Deactivated successfully. Sep 4 17:19:34.096883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b-shm.mount: Deactivated successfully. Sep 4 17:19:34.096938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2-shm.mount: Deactivated successfully. Sep 4 17:19:37.700843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469916840.mount: Deactivated successfully. Sep 4 17:19:37.974587 containerd[1713]: time="2024-09-04T17:19:37.974252061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:37.977953 containerd[1713]: time="2024-09-04T17:19:37.977740555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Sep 4 17:19:37.982972 containerd[1713]: time="2024-09-04T17:19:37.982939215Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:37.987461 containerd[1713]: time="2024-09-04T17:19:37.987414992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:37.988597 containerd[1713]: time="2024-09-04T17:19:37.987975354Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 4.299553724s" Sep 4 17:19:37.988597 containerd[1713]: time="2024-09-04T17:19:37.988009195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Sep 4 17:19:38.000187 containerd[1713]: time="2024-09-04T17:19:38.000157122Z" level=info msg="CreateContainer within sandbox \"da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:19:38.048468 containerd[1713]: time="2024-09-04T17:19:38.048423549Z" level=info msg="CreateContainer within sandbox \"da49030f13f0a46c775facff5ba3e559a654773fdf34b68315cf38523606f09f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dce7f041a66da8be2d488cac7968dbd04252ef5f24c59597afd737659de0d06d\"" Sep 4 17:19:38.049159 containerd[1713]: time="2024-09-04T17:19:38.049129672Z" level=info msg="StartContainer for \"dce7f041a66da8be2d488cac7968dbd04252ef5f24c59597afd737659de0d06d\"" Sep 4 17:19:38.080270 systemd[1]: Started cri-containerd-dce7f041a66da8be2d488cac7968dbd04252ef5f24c59597afd737659de0d06d.scope - libcontainer container dce7f041a66da8be2d488cac7968dbd04252ef5f24c59597afd737659de0d06d. 
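Every RunPodSandbox failure above trips over the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted. The calico-node container is only started at the end of this block, which is why the failures are expected to clear shortly afterwards. An illustrative sketch of that check, with the path taken from the error text and everything else hypothetical:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

// nodenameReady mirrors the error text above: sandbox networking cannot be
// set up until /var/lib/calico/nodename exists, which happens once
// calico/node is running with /var/lib/calico/ mounted. Not Calico's code.
func nodenameReady(path string) (string, error) {
	b, err := os.ReadFile(path)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("%s missing: check that the calico/node container is running and has mounted /var/lib/calico/", path)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	name, err := nodenameReady("/var/lib/calico/nodename")
	if err != nil {
		fmt.Println("not ready:", err)
		return
	}
	fmt.Println("Calico node name:", name)
}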
Sep 4 17:19:38.107405 containerd[1713]: time="2024-09-04T17:19:38.107299137Z" level=info msg="StartContainer for \"dce7f041a66da8be2d488cac7968dbd04252ef5f24c59597afd737659de0d06d\" returns successfully" Sep 4 17:19:38.215292 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:19:38.215400 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 17:19:39.757894 systemd[1]: run-containerd-runc-k8s.io-dce7f041a66da8be2d488cac7968dbd04252ef5f24c59597afd737659de0d06d-runc.4iatB3.mount: Deactivated successfully. Sep 4 17:19:39.791209 kernel: bpftool[4348]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:19:40.000729 systemd-networkd[1599]: vxlan.calico: Link UP Sep 4 17:19:40.000736 systemd-networkd[1599]: vxlan.calico: Gained carrier Sep 4 17:19:41.426266 systemd-networkd[1599]: vxlan.calico: Gained IPv6LL Sep 4 17:19:46.580610 containerd[1713]: time="2024-09-04T17:19:46.580563238Z" level=info msg="StopPodSandbox for \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\"" Sep 4 17:19:46.582862 containerd[1713]: time="2024-09-04T17:19:46.581604002Z" level=info msg="StopPodSandbox for \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\"" Sep 4 17:19:46.582862 containerd[1713]: time="2024-09-04T17:19:46.581807603Z" level=info msg="StopPodSandbox for \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\"" Sep 4 17:19:46.660999 kubelet[3185]: I0904 17:19:46.660921 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mhfhl" podStartSLOduration=9.532249979 podStartE2EDuration="25.66090232s" podCreationTimestamp="2024-09-04 17:19:21 +0000 UTC" firstStartedPulling="2024-09-04 17:19:21.859979016 +0000 UTC m=+24.381172575" lastFinishedPulling="2024-09-04 17:19:37.988631357 +0000 UTC m=+40.509824916" observedRunningTime="2024-09-04 17:19:38.736097178 +0000 UTC m=+41.257290737" watchObservedRunningTime="2024-09-04 17:19:46.66090232 +0000 UTC m=+49.182095879" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.666 [INFO][4473] k8s.go 608: Cleaning up netns ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.666 [INFO][4473] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" iface="eth0" netns="/var/run/netns/cni-0ded43ce-7713-7485-5c2e-23931a0f1b1b" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.668 [INFO][4473] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" iface="eth0" netns="/var/run/netns/cni-0ded43ce-7713-7485-5c2e-23931a0f1b1b" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.668 [INFO][4473] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" iface="eth0" netns="/var/run/netns/cni-0ded43ce-7713-7485-5c2e-23931a0f1b1b" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.668 [INFO][4473] k8s.go 615: Releasing IP address(es) ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.668 [INFO][4473] utils.go 188: Calico CNI releasing IP address ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.690 [INFO][4496] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.690 [INFO][4496] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.690 [INFO][4496] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.701 [WARNING][4496] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.701 [INFO][4496] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.703 [INFO][4496] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:46.709721 containerd[1713]: 2024-09-04 17:19:46.705 [INFO][4473] k8s.go 621: Teardown processing complete. ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:46.709721 containerd[1713]: time="2024-09-04T17:19:46.709579235Z" level=info msg="TearDown network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\" successfully" Sep 4 17:19:46.709721 containerd[1713]: time="2024-09-04T17:19:46.709604795Z" level=info msg="StopPodSandbox for \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\" returns successfully" Sep 4 17:19:46.712701 containerd[1713]: time="2024-09-04T17:19:46.711658683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9qfk,Uid:b2ba46b2-596d-4436-98b2-e74bc079ecc8,Namespace:kube-system,Attempt:1,}" Sep 4 17:19:46.713057 systemd[1]: run-netns-cni\x2d0ded43ce\x2d7713\x2d7485\x2d5c2e\x2d23931a0f1b1b.mount: Deactivated successfully. Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.664 [INFO][4476] k8s.go 608: Cleaning up netns ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.664 [INFO][4476] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" iface="eth0" netns="/var/run/netns/cni-7c4dcbbe-789d-dc04-46dc-251f05205afb" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.665 [INFO][4476] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" iface="eth0" netns="/var/run/netns/cni-7c4dcbbe-789d-dc04-46dc-251f05205afb" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.668 [INFO][4476] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" iface="eth0" netns="/var/run/netns/cni-7c4dcbbe-789d-dc04-46dc-251f05205afb" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.668 [INFO][4476] k8s.go 615: Releasing IP address(es) ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.668 [INFO][4476] utils.go 188: Calico CNI releasing IP address ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.716 [INFO][4494] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.716 [INFO][4494] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.716 [INFO][4494] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.729 [WARNING][4494] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.730 [INFO][4494] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.732 [INFO][4494] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:46.735555 containerd[1713]: 2024-09-04 17:19:46.733 [INFO][4476] k8s.go 621: Teardown processing complete. ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:46.736725 containerd[1713]: time="2024-09-04T17:19:46.736393062Z" level=info msg="TearDown network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\" successfully" Sep 4 17:19:46.736725 containerd[1713]: time="2024-09-04T17:19:46.736684383Z" level=info msg="StopPodSandbox for \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\" returns successfully" Sep 4 17:19:46.738844 systemd[1]: run-netns-cni\x2d7c4dcbbe\x2d789d\x2ddc04\x2d46dc\x2d251f05205afb.mount: Deactivated successfully. 
Sep 4 17:19:46.759571 containerd[1713]: time="2024-09-04T17:19:46.738742511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5774f58464-ztmr4,Uid:48717116-5ff2-47b0-bbf9-6d04482c261b,Namespace:calico-system,Attempt:1,}" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.660 [INFO][4467] k8s.go 608: Cleaning up netns ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.660 [INFO][4467] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" iface="eth0" netns="/var/run/netns/cni-00da2e49-0411-a9c1-3f32-76e96c451bed" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.661 [INFO][4467] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" iface="eth0" netns="/var/run/netns/cni-00da2e49-0411-a9c1-3f32-76e96c451bed" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.662 [INFO][4467] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" iface="eth0" netns="/var/run/netns/cni-00da2e49-0411-a9c1-3f32-76e96c451bed" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.662 [INFO][4467] k8s.go 615: Releasing IP address(es) ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.662 [INFO][4467] utils.go 188: Calico CNI releasing IP address ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.720 [INFO][4490] ipam_plugin.go 417: Releasing address using handleID ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.720 [INFO][4490] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.732 [INFO][4490] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.745 [WARNING][4490] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.761 [INFO][4490] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.762 [INFO][4490] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:46.765976 containerd[1713]: 2024-09-04 17:19:46.764 [INFO][4467] k8s.go 621: Teardown processing complete. 
ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:46.768311 systemd[1]: run-netns-cni\x2d00da2e49\x2d0411\x2da9c1\x2d3f32\x2d76e96c451bed.mount: Deactivated successfully. Sep 4 17:19:46.768578 containerd[1713]: time="2024-09-04T17:19:46.768342030Z" level=info msg="TearDown network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\" successfully" Sep 4 17:19:46.768578 containerd[1713]: time="2024-09-04T17:19:46.768374110Z" level=info msg="StopPodSandbox for \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\" returns successfully" Sep 4 17:19:46.769601 containerd[1713]: time="2024-09-04T17:19:46.769572875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmfdp,Uid:671faa99-9531-400b-8b3d-16fab71771a9,Namespace:kube-system,Attempt:1,}" Sep 4 17:19:46.992131 systemd-networkd[1599]: cali69d45513e1f: Link UP Sep 4 17:19:46.992445 systemd-networkd[1599]: cali69d45513e1f: Gained carrier Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.890 [INFO][4510] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0 coredns-7db6d8ff4d- kube-system b2ba46b2-596d-4436-98b2-e74bc079ecc8 729 0 2024-09-04 17:19:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-bdc284204f coredns-7db6d8ff4d-h9qfk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69d45513e1f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.890 [INFO][4510] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.942 [INFO][4520] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" HandleID="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.952 [INFO][4520] ipam_plugin.go 270: Auto assigning IP ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" HandleID="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002edaf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-a-bdc284204f", "pod":"coredns-7db6d8ff4d-h9qfk", "timestamp":"2024-09-04 17:19:46.942033858 +0000 UTC"}, Hostname:"ci-3975.2.1-a-bdc284204f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:19:47.021223 
containerd[1713]: 2024-09-04 17:19:46.952 [INFO][4520] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.953 [INFO][4520] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.953 [INFO][4520] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-bdc284204f' Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.955 [INFO][4520] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.961 [INFO][4520] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.968 [INFO][4520] ipam.go 489: Trying affinity for 192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.970 [INFO][4520] ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.972 [INFO][4520] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.972 [INFO][4520] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.973 [INFO][4520] ipam.go 1685: Creating new handle: k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202 Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.977 [INFO][4520] ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.983 [INFO][4520] ipam.go 1216: Successfully claimed IPs: [192.168.105.129/26] block=192.168.105.128/26 handle="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.983 [INFO][4520] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.129/26] handle="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.983 [INFO][4520] ipam_plugin.go 379: Released host-wide IPAM lock. 
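The IPAM sequence above is Calico assigning a pod address from the node's affine /26 block under a host-wide lock: confirm the affinity for 192.168.105.128/26, load the block, then claim the first free address, 192.168.105.129. A toy sketch of that assignment step, with the block taken from the log and the logic heavily simplified relative to Calico's real IPAM (which also tracks handles and writes the block back to the datastore):

package main

import (
	"fmt"
	"net/netip"
)

// nextFreeInBlock is a toy version of the assignment step logged above:
// given the node's affine block and the addresses already in use, return
// the next free pod IP. A simplification, not Calico's implementation.
func nextFreeInBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.105.128/26") // block from the log
	used := map[netip.Addr]bool{
		// assume the block's network address is held back
		netip.MustParseAddr("192.168.105.128"): true,
	}
	if addr, ok := nextFreeInBlock(block, used); ok {
		fmt.Println("next pod IP:", addr) // 192.168.105.129, matching the log
	}
}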
Sep 4 17:19:47.021223 containerd[1713]: 2024-09-04 17:19:46.983 [INFO][4520] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.105.129/26] IPv6=[] ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" HandleID="k8s-pod-network.b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:47.022996 containerd[1713]: 2024-09-04 17:19:46.985 [INFO][4510] k8s.go 386: Populated endpoint ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b2ba46b2-596d-4436-98b2-e74bc079ecc8", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"", Pod:"coredns-7db6d8ff4d-h9qfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69d45513e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:47.022996 containerd[1713]: 2024-09-04 17:19:46.986 [INFO][4510] k8s.go 387: Calico CNI using IPs: [192.168.105.129/32] ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:47.022996 containerd[1713]: 2024-09-04 17:19:46.986 [INFO][4510] dataplane_linux.go 68: Setting the host side veth name to cali69d45513e1f ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:47.022996 containerd[1713]: 2024-09-04 17:19:46.990 [INFO][4510] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" 
WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:47.022996 containerd[1713]: 2024-09-04 17:19:46.990 [INFO][4510] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b2ba46b2-596d-4436-98b2-e74bc079ecc8", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202", Pod:"coredns-7db6d8ff4d-h9qfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69d45513e1f", MAC:"72:f8:14:9e:19:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:47.022996 containerd[1713]: 2024-09-04 17:19:47.008 [INFO][4510] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202" Namespace="kube-system" Pod="coredns-7db6d8ff4d-h9qfk" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:47.071398 containerd[1713]: time="2024-09-04T17:19:47.071271957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:47.071677 containerd[1713]: time="2024-09-04T17:19:47.071489638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:47.072104 containerd[1713]: time="2024-09-04T17:19:47.071900360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:47.072104 containerd[1713]: time="2024-09-04T17:19:47.071945280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:47.089130 systemd-networkd[1599]: cali3171b181fcc: Link UP Sep 4 17:19:47.089350 systemd-networkd[1599]: cali3171b181fcc: Gained carrier Sep 4 17:19:47.108240 systemd[1]: Started cri-containerd-b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202.scope - libcontainer container b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202. Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:46.943 [INFO][4523] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0 calico-kube-controllers-5774f58464- calico-system 48717116-5ff2-47b0-bbf9-6d04482c261b 728 0 2024-09-04 17:19:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5774f58464 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.1-a-bdc284204f calico-kube-controllers-5774f58464-ztmr4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3171b181fcc [] []}} ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:46.943 [INFO][4523] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.024 [INFO][4551] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" HandleID="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.041 [INFO][4551] ipam_plugin.go 270: Auto assigning IP ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" HandleID="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003645d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-bdc284204f", "pod":"calico-kube-controllers-5774f58464-ztmr4", "timestamp":"2024-09-04 17:19:47.021011901 +0000 UTC"}, Hostname:"ci-3975.2.1-a-bdc284204f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.042 [INFO][4551] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.042 [INFO][4551] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
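Three separate CNI invocations ([4520], [4551] and, further down, [4556]) each log "About to acquire host-wide IPAM lock", "Acquired host-wide IPAM lock" and "Released host-wide IPAM lock", and their block lookups never interleave. The sketch below shows only that serialization pattern; the in-process mutex is a stand-in for whatever cross-process, host-wide lock Calico actually uses, and the PIDs are reused purely as labels.

```go
package main

import (
	"fmt"
	"sync"
)

// assign mimics one CNI invocation's IPAM critical section, producing the
// same three messages seen in the journal above. The mutex stands in for
// Calico's host-wide lock; real invocations are separate processes.
func assign(id int, lock *sync.Mutex, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Printf("[%d] About to acquire host-wide IPAM lock.\n", id)
	lock.Lock()
	fmt.Printf("[%d] Acquired host-wide IPAM lock.\n", id)
	// ... look up affinity, load the /26 block, claim an address ...
	fmt.Printf("[%d] Released host-wide IPAM lock.\n", id)
	lock.Unlock()
}

func main() {
	var lock sync.Mutex
	var wg sync.WaitGroup
	for _, id := range []int{4520, 4551, 4556} { // process IDs from the log, used only as labels
		wg.Add(1)
		go assign(id, &lock, &wg)
	}
	wg.Wait()
}
```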
Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.042 [INFO][4551] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-bdc284204f' Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.045 [INFO][4551] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.051 [INFO][4551] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.061 [INFO][4551] ipam.go 489: Trying affinity for 192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.063 [INFO][4551] ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.066 [INFO][4551] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.068 [INFO][4551] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.070 [INFO][4551] ipam.go 1685: Creating new handle: k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.074 [INFO][4551] ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.079 [INFO][4551] ipam.go 1216: Successfully claimed IPs: [192.168.105.130/26] block=192.168.105.128/26 handle="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.079 [INFO][4551] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.130/26] handle="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.079 [INFO][4551] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:19:47.113922 containerd[1713]: 2024-09-04 17:19:47.079 [INFO][4551] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.105.130/26] IPv6=[] ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" HandleID="k8s-pod-network.942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:47.115022 containerd[1713]: 2024-09-04 17:19:47.084 [INFO][4523] k8s.go 386: Populated endpoint ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0", GenerateName:"calico-kube-controllers-5774f58464-", Namespace:"calico-system", SelfLink:"", UID:"48717116-5ff2-47b0-bbf9-6d04482c261b", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5774f58464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"", Pod:"calico-kube-controllers-5774f58464-ztmr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3171b181fcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:47.115022 containerd[1713]: 2024-09-04 17:19:47.084 [INFO][4523] k8s.go 387: Calico CNI using IPs: [192.168.105.130/32] ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:47.115022 containerd[1713]: 2024-09-04 17:19:47.085 [INFO][4523] dataplane_linux.go 68: Setting the host side veth name to cali3171b181fcc ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:47.115022 containerd[1713]: 2024-09-04 17:19:47.088 [INFO][4523] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:47.115022 containerd[1713]: 2024-09-04 17:19:47.089 [INFO][4523] 
k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0", GenerateName:"calico-kube-controllers-5774f58464-", Namespace:"calico-system", SelfLink:"", UID:"48717116-5ff2-47b0-bbf9-6d04482c261b", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5774f58464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb", Pod:"calico-kube-controllers-5774f58464-ztmr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3171b181fcc", MAC:"02:13:e5:5c:4e:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:47.115022 containerd[1713]: 2024-09-04 17:19:47.107 [INFO][4523] k8s.go 500: Wrote updated endpoint to datastore ContainerID="942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb" Namespace="calico-system" Pod="calico-kube-controllers-5774f58464-ztmr4" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:47.148140 systemd-networkd[1599]: cali013cd539d44: Link UP Sep 4 17:19:47.150330 systemd-networkd[1599]: cali013cd539d44: Gained carrier Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:46.963 [INFO][4534] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0 coredns-7db6d8ff4d- kube-system 671faa99-9531-400b-8b3d-16fab71771a9 727 0 2024-09-04 17:19:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.1-a-bdc284204f coredns-7db6d8ff4d-hmfdp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali013cd539d44 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:46.963 [INFO][4534] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.053 [INFO][4556] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" HandleID="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.072 [INFO][4556] ipam_plugin.go 270: Auto assigning IP ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" HandleID="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c6f50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.1-a-bdc284204f", "pod":"coredns-7db6d8ff4d-hmfdp", "timestamp":"2024-09-04 17:19:47.053271025 +0000 UTC"}, Hostname:"ci-3975.2.1-a-bdc284204f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.073 [INFO][4556] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.079 [INFO][4556] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.079 [INFO][4556] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-bdc284204f' Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.081 [INFO][4556] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.084 [INFO][4556] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.097 [INFO][4556] ipam.go 489: Trying affinity for 192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.105 [INFO][4556] ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.116 [INFO][4556] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.117 [INFO][4556] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.119 [INFO][4556] ipam.go 1685: Creating new handle: k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.125 [INFO][4556] ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" 
host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.136 [INFO][4556] ipam.go 1216: Successfully claimed IPs: [192.168.105.131/26] block=192.168.105.128/26 handle="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.136 [INFO][4556] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.131/26] handle="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.137 [INFO][4556] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:47.172640 containerd[1713]: 2024-09-04 17:19:47.137 [INFO][4556] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.105.131/26] IPv6=[] ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" HandleID="k8s-pod-network.45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:47.173853 containerd[1713]: 2024-09-04 17:19:47.142 [INFO][4534] k8s.go 386: Populated endpoint ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"671faa99-9531-400b-8b3d-16fab71771a9", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"", Pod:"coredns-7db6d8ff4d-hmfdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali013cd539d44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:47.173853 containerd[1713]: 2024-09-04 17:19:47.143 [INFO][4534] k8s.go 387: Calico CNI using IPs: [192.168.105.131/32] ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" 
WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:47.173853 containerd[1713]: 2024-09-04 17:19:47.143 [INFO][4534] dataplane_linux.go 68: Setting the host side veth name to cali013cd539d44 ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:47.173853 containerd[1713]: 2024-09-04 17:19:47.147 [INFO][4534] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:47.173853 containerd[1713]: 2024-09-04 17:19:47.148 [INFO][4534] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"671faa99-9531-400b-8b3d-16fab71771a9", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de", Pod:"coredns-7db6d8ff4d-hmfdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali013cd539d44", MAC:"72:31:6f:ec:02:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:47.173853 containerd[1713]: 2024-09-04 17:19:47.164 [INFO][4534] k8s.go 500: Wrote updated endpoint to datastore ContainerID="45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-hmfdp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:47.178979 containerd[1713]: time="2024-09-04T17:19:47.178057341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:47.178979 containerd[1713]: time="2024-09-04T17:19:47.178859025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:47.178979 containerd[1713]: time="2024-09-04T17:19:47.178946265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:47.179336 containerd[1713]: time="2024-09-04T17:19:47.179132226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:47.187447 containerd[1713]: time="2024-09-04T17:19:47.187401229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9qfk,Uid:b2ba46b2-596d-4436-98b2-e74bc079ecc8,Namespace:kube-system,Attempt:1,} returns sandbox id \"b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202\"" Sep 4 17:19:47.196355 containerd[1713]: time="2024-09-04T17:19:47.196321834Z" level=info msg="CreateContainer within sandbox \"b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:19:47.213328 systemd[1]: Started cri-containerd-942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb.scope - libcontainer container 942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb. Sep 4 17:19:47.243573 containerd[1713]: time="2024-09-04T17:19:47.243476874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5774f58464-ztmr4,Uid:48717116-5ff2-47b0-bbf9-6d04482c261b,Namespace:calico-system,Attempt:1,} returns sandbox id \"942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb\"" Sep 4 17:19:47.247115 containerd[1713]: time="2024-09-04T17:19:47.247060573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:19:47.264818 containerd[1713]: time="2024-09-04T17:19:47.264532982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:47.264818 containerd[1713]: time="2024-09-04T17:19:47.264597342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:47.264818 containerd[1713]: time="2024-09-04T17:19:47.264610662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:47.264818 containerd[1713]: time="2024-09-04T17:19:47.264625742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:47.280310 systemd[1]: Started cri-containerd-45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de.scope - libcontainer container 45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de. 
Sep 4 17:19:47.308011 containerd[1713]: time="2024-09-04T17:19:47.307933363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-hmfdp,Uid:671faa99-9531-400b-8b3d-16fab71771a9,Namespace:kube-system,Attempt:1,} returns sandbox id \"45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de\"" Sep 4 17:19:47.312006 containerd[1713]: time="2024-09-04T17:19:47.311885583Z" level=info msg="CreateContainer within sandbox \"45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:19:47.457573 containerd[1713]: time="2024-09-04T17:19:47.457494045Z" level=info msg="CreateContainer within sandbox \"b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ac38dcb72e59ae501d959447af9d45bd374097acf474d1b7246d209f07f38d8\"" Sep 4 17:19:47.458281 containerd[1713]: time="2024-09-04T17:19:47.458187249Z" level=info msg="StartContainer for \"3ac38dcb72e59ae501d959447af9d45bd374097acf474d1b7246d209f07f38d8\"" Sep 4 17:19:47.460910 containerd[1713]: time="2024-09-04T17:19:47.460768102Z" level=info msg="CreateContainer within sandbox \"45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"874bd91b41aba37d5e9d9bc18016d9249e26cac514a661e10f9b53d799bdb8f7\"" Sep 4 17:19:47.461217 containerd[1713]: time="2024-09-04T17:19:47.461131624Z" level=info msg="StartContainer for \"874bd91b41aba37d5e9d9bc18016d9249e26cac514a661e10f9b53d799bdb8f7\"" Sep 4 17:19:47.484448 systemd[1]: Started cri-containerd-3ac38dcb72e59ae501d959447af9d45bd374097acf474d1b7246d209f07f38d8.scope - libcontainer container 3ac38dcb72e59ae501d959447af9d45bd374097acf474d1b7246d209f07f38d8. Sep 4 17:19:47.487684 systemd[1]: Started cri-containerd-874bd91b41aba37d5e9d9bc18016d9249e26cac514a661e10f9b53d799bdb8f7.scope - libcontainer container 874bd91b41aba37d5e9d9bc18016d9249e26cac514a661e10f9b53d799bdb8f7. 
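By this point containerd has reported sandbox ids for all three pods via "RunPodSandbox for &PodSandboxMetadata{Name:...} returns sandbox id \"...\"" and has started creating the coredns containers inside them. When reading raw journal output like this it can help to map pod names to sandbox ids with a small scraper; the regular expression below is written against the exact message format visible in these lines and is only a convenience sketch, not part of any tool mentioned in the log.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches containerd's CRI message as it appears in the journal above, e.g.
//   RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-h9qfk,...} returns sandbox id \"b6a6d0...\"
// The lazy .*? keeps pairings correct even if several records share a line.
var sandboxRe = regexp.MustCompile(`RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*?returns sandbox id \\"([0-9a-f]+)\\"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here can be very long
	for sc.Scan() {
		for _, m := range sandboxRe.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("pod %-40s sandbox %s\n", m[1], m[2])
		}
	}
}
```

Feeding it the journal text above would print the coredns-7db6d8ff4d-h9qfk, calico-kube-controllers-5774f58464-ztmr4 and coredns-7db6d8ff4d-hmfdp sandbox ids.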
Sep 4 17:19:47.541886 containerd[1713]: time="2024-09-04T17:19:47.541710074Z" level=info msg="StartContainer for \"3ac38dcb72e59ae501d959447af9d45bd374097acf474d1b7246d209f07f38d8\" returns successfully" Sep 4 17:19:47.541886 containerd[1713]: time="2024-09-04T17:19:47.541789435Z" level=info msg="StartContainer for \"874bd91b41aba37d5e9d9bc18016d9249e26cac514a661e10f9b53d799bdb8f7\" returns successfully" Sep 4 17:19:47.751089 kubelet[3185]: I0904 17:19:47.750985 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-hmfdp" podStartSLOduration=33.750970821 podStartE2EDuration="33.750970821s" podCreationTimestamp="2024-09-04 17:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:47.750271377 +0000 UTC m=+50.271464936" watchObservedRunningTime="2024-09-04 17:19:47.750970821 +0000 UTC m=+50.272164380" Sep 4 17:19:47.785389 kubelet[3185]: I0904 17:19:47.784864 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-h9qfk" podStartSLOduration=33.784845833 podStartE2EDuration="33.784845833s" podCreationTimestamp="2024-09-04 17:19:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:19:47.767365144 +0000 UTC m=+50.288558663" watchObservedRunningTime="2024-09-04 17:19:47.784845833 +0000 UTC m=+50.306039392" Sep 4 17:19:48.402194 systemd-networkd[1599]: cali3171b181fcc: Gained IPv6LL Sep 4 17:19:48.530308 systemd-networkd[1599]: cali013cd539d44: Gained IPv6LL Sep 4 17:19:48.580485 containerd[1713]: time="2024-09-04T17:19:48.580166046Z" level=info msg="StopPodSandbox for \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\"" Sep 4 17:19:48.595492 systemd-networkd[1599]: cali69d45513e1f: Gained IPv6LL Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.668 [INFO][4822] k8s.go 608: Cleaning up netns ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.669 [INFO][4822] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" iface="eth0" netns="/var/run/netns/cni-ac6bf2e1-3695-ffcb-4c64-2d7fee69bf06" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.669 [INFO][4822] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" iface="eth0" netns="/var/run/netns/cni-ac6bf2e1-3695-ffcb-4c64-2d7fee69bf06" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.669 [INFO][4822] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" iface="eth0" netns="/var/run/netns/cni-ac6bf2e1-3695-ffcb-4c64-2d7fee69bf06" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.669 [INFO][4822] k8s.go 615: Releasing IP address(es) ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.669 [INFO][4822] utils.go 188: Calico CNI releasing IP address ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.690 [INFO][4828] ipam_plugin.go 417: Releasing address using handleID ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.690 [INFO][4828] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.691 [INFO][4828] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.700 [WARNING][4828] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.700 [INFO][4828] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.701 [INFO][4828] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:48.705711 containerd[1713]: 2024-09-04 17:19:48.702 [INFO][4822] k8s.go 621: Teardown processing complete. ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:48.707757 systemd[1]: run-netns-cni\x2dac6bf2e1\x2d3695\x2dffcb\x2d4c64\x2d2d7fee69bf06.mount: Deactivated successfully. 
Sep 4 17:19:48.709673 containerd[1713]: time="2024-09-04T17:19:48.708228979Z" level=info msg="TearDown network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\" successfully" Sep 4 17:19:48.709673 containerd[1713]: time="2024-09-04T17:19:48.708259939Z" level=info msg="StopPodSandbox for \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\" returns successfully" Sep 4 17:19:48.713509 containerd[1713]: time="2024-09-04T17:19:48.713272045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v67mp,Uid:088eb567-a576-4f83-a123-2327acb5e8ca,Namespace:calico-system,Attempt:1,}" Sep 4 17:19:48.888273 systemd-networkd[1599]: calia544df58d4f: Link UP Sep 4 17:19:48.889897 systemd-networkd[1599]: calia544df58d4f: Gained carrier Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.800 [INFO][4838] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0 csi-node-driver- calico-system 088eb567-a576-4f83-a123-2327acb5e8ca 763 0 2024-09-04 17:19:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.1-a-bdc284204f csi-node-driver-v67mp eth0 default [] [] [kns.calico-system ksa.calico-system.default] calia544df58d4f [] []}} ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.800 [INFO][4838] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.831 [INFO][4849] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" HandleID="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.841 [INFO][4849] ipam_plugin.go 270: Auto assigning IP ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" HandleID="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebd80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.1-a-bdc284204f", "pod":"csi-node-driver-v67mp", "timestamp":"2024-09-04 17:19:48.831561688 +0000 UTC"}, Hostname:"ci-3975.2.1-a-bdc284204f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.842 [INFO][4849] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.842 [INFO][4849] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.842 [INFO][4849] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-bdc284204f' Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.843 [INFO][4849] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.849 [INFO][4849] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.859 [INFO][4849] ipam.go 489: Trying affinity for 192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.863 [INFO][4849] ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.868 [INFO][4849] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.869 [INFO][4849] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.872 [INFO][4849] ipam.go 1685: Creating new handle: k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.876 [INFO][4849] ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.883 [INFO][4849] ipam.go 1216: Successfully claimed IPs: [192.168.105.132/26] block=192.168.105.128/26 handle="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.883 [INFO][4849] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.132/26] handle="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.883 [INFO][4849] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:19:48.905747 containerd[1713]: 2024-09-04 17:19:48.883 [INFO][4849] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.105.132/26] IPv6=[] ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" HandleID="k8s-pod-network.18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.906348 containerd[1713]: 2024-09-04 17:19:48.885 [INFO][4838] k8s.go 386: Populated endpoint ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"088eb567-a576-4f83-a123-2327acb5e8ca", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"", Pod:"csi-node-driver-v67mp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia544df58d4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:48.906348 containerd[1713]: 2024-09-04 17:19:48.885 [INFO][4838] k8s.go 387: Calico CNI using IPs: [192.168.105.132/32] ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.906348 containerd[1713]: 2024-09-04 17:19:48.885 [INFO][4838] dataplane_linux.go 68: Setting the host side veth name to calia544df58d4f ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.906348 containerd[1713]: 2024-09-04 17:19:48.890 [INFO][4838] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.906348 containerd[1713]: 2024-09-04 17:19:48.890 [INFO][4838] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" 
WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"088eb567-a576-4f83-a123-2327acb5e8ca", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f", Pod:"csi-node-driver-v67mp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia544df58d4f", MAC:"9e:36:3e:ac:e1:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:48.906348 containerd[1713]: 2024-09-04 17:19:48.899 [INFO][4838] k8s.go 500: Wrote updated endpoint to datastore ContainerID="18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f" Namespace="calico-system" Pod="csi-node-driver-v67mp" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:48.948509 containerd[1713]: time="2024-09-04T17:19:48.948139122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:19:48.948509 containerd[1713]: time="2024-09-04T17:19:48.948185802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:48.948509 containerd[1713]: time="2024-09-04T17:19:48.948198642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:19:48.948509 containerd[1713]: time="2024-09-04T17:19:48.948208642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:19:48.975249 systemd[1]: Started cri-containerd-18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f.scope - libcontainer container 18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f. 
Sep 4 17:19:49.009466 containerd[1713]: time="2024-09-04T17:19:49.009166353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v67mp,Uid:088eb567-a576-4f83-a123-2327acb5e8ca,Namespace:calico-system,Attempt:1,} returns sandbox id \"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f\"" Sep 4 17:19:49.258891 containerd[1713]: time="2024-09-04T17:19:49.258765785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:49.261511 containerd[1713]: time="2024-09-04T17:19:49.261443958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:19:49.265375 containerd[1713]: time="2024-09-04T17:19:49.265316458Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:49.270585 containerd[1713]: time="2024-09-04T17:19:49.270535165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:49.271441 containerd[1713]: time="2024-09-04T17:19:49.271099088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 2.023991115s" Sep 4 17:19:49.271441 containerd[1713]: time="2024-09-04T17:19:49.271135208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:19:49.273606 containerd[1713]: time="2024-09-04T17:19:49.273430059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:19:49.284251 containerd[1713]: time="2024-09-04T17:19:49.284221234Z" level=info msg="CreateContainer within sandbox \"942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:19:49.323824 containerd[1713]: time="2024-09-04T17:19:49.323727676Z" level=info msg="CreateContainer within sandbox \"942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3dc104d77bb635e8577c005c4ff6771739d4992319bf9c2413d5872015143f48\"" Sep 4 17:19:49.325112 containerd[1713]: time="2024-09-04T17:19:49.324298879Z" level=info msg="StartContainer for \"3dc104d77bb635e8577c005c4ff6771739d4992319bf9c2413d5872015143f48\"" Sep 4 17:19:49.354254 systemd[1]: Started cri-containerd-3dc104d77bb635e8577c005c4ff6771739d4992319bf9c2413d5872015143f48.scope - libcontainer container 3dc104d77bb635e8577c005c4ff6771739d4992319bf9c2413d5872015143f48. 
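The kube-controllers pull above reports both "bytes read=31361753" when the pull stops and "in 2.023991115s" for the pull itself, so a rough transfer rate falls straight out of the log. What exactly "bytes read" covers is containerd's accounting, so treat the result as a ballpark figure; the snippet below is only that back-of-the-envelope division.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the journal above for ghcr.io/flatcar/calico/kube-controllers:v3.28.1.
	bytesRead := 31361753.0
	elapsed, _ := time.ParseDuration("2.023991115s")

	mib := bytesRead / (1 << 20)
	fmt.Printf("%.1f MiB in %s ~ %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
	// Prints roughly: 29.9 MiB in 2.023991115s ~ 14.8 MiB/s
}
```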
Sep 4 17:19:49.429918 containerd[1713]: time="2024-09-04T17:19:49.429870817Z" level=info msg="StartContainer for \"3dc104d77bb635e8577c005c4ff6771739d4992319bf9c2413d5872015143f48\" returns successfully" Sep 4 17:19:49.786029 kubelet[3185]: I0904 17:19:49.785962 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5774f58464-ztmr4" podStartSLOduration=26.759538824 podStartE2EDuration="28.785929791s" podCreationTimestamp="2024-09-04 17:19:21 +0000 UTC" firstStartedPulling="2024-09-04 17:19:47.246387569 +0000 UTC m=+49.767581128" lastFinishedPulling="2024-09-04 17:19:49.272778536 +0000 UTC m=+51.793972095" observedRunningTime="2024-09-04 17:19:49.784392383 +0000 UTC m=+52.305585942" watchObservedRunningTime="2024-09-04 17:19:49.785929791 +0000 UTC m=+52.307123350" Sep 4 17:19:49.938265 systemd-networkd[1599]: calia544df58d4f: Gained IPv6LL Sep 4 17:19:50.439135 containerd[1713]: time="2024-09-04T17:19:50.438609118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:50.442351 containerd[1713]: time="2024-09-04T17:19:50.442255816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:19:50.447511 containerd[1713]: time="2024-09-04T17:19:50.447458723Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:50.452994 containerd[1713]: time="2024-09-04T17:19:50.452939791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:50.454151 containerd[1713]: time="2024-09-04T17:19:50.453583914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.180121014s" Sep 4 17:19:50.454151 containerd[1713]: time="2024-09-04T17:19:50.453619274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:19:50.455768 containerd[1713]: time="2024-09-04T17:19:50.455643284Z" level=info msg="CreateContainer within sandbox \"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:19:51.177575 containerd[1713]: time="2024-09-04T17:19:51.177524448Z" level=info msg="CreateContainer within sandbox \"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"89972e5bd5fb816c64439c151e6b16b63ac403cd8f1d03eae3c646d5ef03dddd\"" Sep 4 17:19:51.178677 containerd[1713]: time="2024-09-04T17:19:51.178169811Z" level=info msg="StartContainer for \"89972e5bd5fb816c64439c151e6b16b63ac403cd8f1d03eae3c646d5ef03dddd\"" Sep 4 17:19:51.214275 systemd[1]: Started cri-containerd-89972e5bd5fb816c64439c151e6b16b63ac403cd8f1d03eae3c646d5ef03dddd.scope - libcontainer container 89972e5bd5fb816c64439c151e6b16b63ac403cd8f1d03eae3c646d5ef03dddd. 
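For calico-kube-controllers the kubelet reports podStartSLOduration=26.759538824 against podStartE2EDuration=28.785929791s, and it also prints firstStartedPulling and lastFinishedPulling. The gap between the two durations matches the image-pull window exactly, which is consistent with the SLO figure excluding image pull time; you can verify it directly from the monotonic m=+ offsets printed in that record. The worked check below uses only the numbers from the log line above.

```go
package main

import "fmt"

func main() {
	// Monotonic offsets (m=+...) copied from the kubelet record above.
	firstStartedPulling := 49.767581128
	lastFinishedPulling := 51.793972095
	podStartSLO := 26.759538824
	podStartE2E := 28.785929791

	fmt.Printf("pull window:       %.6fs\n", lastFinishedPulling-firstStartedPulling) // 2.026391s
	fmt.Printf("E2E minus SLO dur: %.6fs\n", podStartE2E-podStartSLO)                 // 2.026391s
	// The two differences agree, so the difference between the E2E and SLO
	// durations here is exactly the image pull window.
}
```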
Sep 4 17:19:51.241351 containerd[1713]: time="2024-09-04T17:19:51.241306627Z" level=info msg="StartContainer for \"89972e5bd5fb816c64439c151e6b16b63ac403cd8f1d03eae3c646d5ef03dddd\" returns successfully" Sep 4 17:19:51.243044 containerd[1713]: time="2024-09-04T17:19:51.243013034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:19:53.357036 containerd[1713]: time="2024-09-04T17:19:53.356983797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:53.364048 containerd[1713]: time="2024-09-04T17:19:53.363859505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:19:53.370813 containerd[1713]: time="2024-09-04T17:19:53.370238491Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:53.377048 containerd[1713]: time="2024-09-04T17:19:53.376594237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:53.378531 containerd[1713]: time="2024-09-04T17:19:53.378466924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 2.13541693s" Sep 4 17:19:53.378671 containerd[1713]: time="2024-09-04T17:19:53.378652245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:19:53.387994 containerd[1713]: time="2024-09-04T17:19:53.387951283Z" level=info msg="CreateContainer within sandbox \"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:19:53.441048 containerd[1713]: time="2024-09-04T17:19:53.440994578Z" level=info msg="CreateContainer within sandbox \"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f5f50a3f3b9622283fd558b9d83da6c3f85e55817b42031ec4bdc8b95fb34833\"" Sep 4 17:19:53.441655 containerd[1713]: time="2024-09-04T17:19:53.441531020Z" level=info msg="StartContainer for \"f5f50a3f3b9622283fd558b9d83da6c3f85e55817b42031ec4bdc8b95fb34833\"" Sep 4 17:19:53.475263 systemd[1]: Started cri-containerd-f5f50a3f3b9622283fd558b9d83da6c3f85e55817b42031ec4bdc8b95fb34833.scope - libcontainer container f5f50a3f3b9622283fd558b9d83da6c3f85e55817b42031ec4bdc8b95fb34833. 
Sep 4 17:19:53.509513 containerd[1713]: time="2024-09-04T17:19:53.509462255Z" level=info msg="StartContainer for \"f5f50a3f3b9622283fd558b9d83da6c3f85e55817b42031ec4bdc8b95fb34833\" returns successfully" Sep 4 17:19:53.665995 kubelet[3185]: I0904 17:19:53.665948 3185 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:19:53.665995 kubelet[3185]: I0904 17:19:53.665986 3185 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:19:56.718509 update_engine[1679]: I0904 17:19:56.718462 1679 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 4 17:19:56.718509 update_engine[1679]: I0904 17:19:56.718503 1679 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 4 17:19:56.718895 update_engine[1679]: I0904 17:19:56.718688 1679 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719508 1679 omaha_request_params.cc:62] Current group set to stable Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719620 1679 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719625 1679 update_attempter.cc:643] Scheduling an action processor start. Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719637 1679 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719664 1679 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719722 1679 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719727 1679 omaha_request_action.cc:272] Request: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: Sep 4 17:19:56.720452 update_engine[1679]: I0904 17:19:56.719730 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:19:56.721071 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 4 17:19:56.721913 update_engine[1679]: I0904 17:19:56.721881 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:19:56.722194 update_engine[1679]: I0904 17:19:56.722172 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 17:19:56.760203 update_engine[1679]: E0904 17:19:56.760098 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:19:56.760203 update_engine[1679]: I0904 17:19:56.760174 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 4 17:19:57.596778 containerd[1713]: time="2024-09-04T17:19:57.596694262Z" level=info msg="StopPodSandbox for \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\"" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.638 [WARNING][5070] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"088eb567-a576-4f83-a123-2327acb5e8ca", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f", Pod:"csi-node-driver-v67mp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia544df58d4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.638 [INFO][5070] k8s.go 608: Cleaning up netns ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.638 [INFO][5070] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" iface="eth0" netns="" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.638 [INFO][5070] k8s.go 615: Releasing IP address(es) ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.638 [INFO][5070] utils.go 188: Calico CNI releasing IP address ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.659 [INFO][5076] ipam_plugin.go 417: Releasing address using handleID ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.659 [INFO][5076] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.659 [INFO][5076] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.669 [WARNING][5076] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.669 [INFO][5076] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.671 [INFO][5076] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:57.676950 containerd[1713]: 2024-09-04 17:19:57.674 [INFO][5070] k8s.go 621: Teardown processing complete. ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.676950 containerd[1713]: time="2024-09-04T17:19:57.676030808Z" level=info msg="TearDown network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\" successfully" Sep 4 17:19:57.676950 containerd[1713]: time="2024-09-04T17:19:57.676070649Z" level=info msg="StopPodSandbox for \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\" returns successfully" Sep 4 17:19:57.676950 containerd[1713]: time="2024-09-04T17:19:57.676764172Z" level=info msg="RemovePodSandbox for \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\"" Sep 4 17:19:57.684354 containerd[1713]: time="2024-09-04T17:19:57.676798452Z" level=info msg="Forcibly stopping sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\"" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.720 [WARNING][5094] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"088eb567-a576-4f83-a123-2327acb5e8ca", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"18500731193d92f15aca69e68d742eed07c30b915f24f62ef39597aac8846b6f", Pod:"csi-node-driver-v67mp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.105.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia544df58d4f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.720 [INFO][5094] k8s.go 608: Cleaning up netns ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.720 [INFO][5094] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" iface="eth0" netns="" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.721 [INFO][5094] k8s.go 615: Releasing IP address(es) ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.721 [INFO][5094] utils.go 188: Calico CNI releasing IP address ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.746 [INFO][5100] ipam_plugin.go 417: Releasing address using handleID ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.747 [INFO][5100] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.747 [INFO][5100] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.761 [WARNING][5100] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.762 [INFO][5100] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" HandleID="k8s-pod-network.d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Workload="ci--3975.2.1--a--bdc284204f-k8s-csi--node--driver--v67mp-eth0" Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.767 [INFO][5100] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:57.773397 containerd[1713]: 2024-09-04 17:19:57.770 [INFO][5094] k8s.go 621: Teardown processing complete. ContainerID="d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b" Sep 4 17:19:57.774687 containerd[1713]: time="2024-09-04T17:19:57.773852596Z" level=info msg="TearDown network for sandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\" successfully" Sep 4 17:19:57.785949 containerd[1713]: time="2024-09-04T17:19:57.785538367Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:19:57.785949 containerd[1713]: time="2024-09-04T17:19:57.785611447Z" level=info msg="RemovePodSandbox \"d09a21da217aebe5e059c7112ed6b035dc12bdb90c57a5f2d950fd8885e2c50b\" returns successfully" Sep 4 17:19:57.795120 containerd[1713]: time="2024-09-04T17:19:57.795051848Z" level=info msg="StopPodSandbox for \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\"" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.829 [WARNING][5118] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b2ba46b2-596d-4436-98b2-e74bc079ecc8", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202", Pod:"coredns-7db6d8ff4d-h9qfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69d45513e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.856 [INFO][5118] k8s.go 608: Cleaning up netns ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.856 [INFO][5118] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" iface="eth0" netns="" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.856 [INFO][5118] k8s.go 615: Releasing IP address(es) ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.856 [INFO][5118] utils.go 188: Calico CNI releasing IP address ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.877 [INFO][5125] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.877 [INFO][5125] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.877 [INFO][5125] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.884 [WARNING][5125] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.884 [INFO][5125] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.886 [INFO][5125] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:57.889104 containerd[1713]: 2024-09-04 17:19:57.887 [INFO][5118] k8s.go 621: Teardown processing complete. ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.889104 containerd[1713]: time="2024-09-04T17:19:57.889038179Z" level=info msg="TearDown network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\" successfully" Sep 4 17:19:57.889104 containerd[1713]: time="2024-09-04T17:19:57.889069419Z" level=info msg="StopPodSandbox for \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\" returns successfully" Sep 4 17:19:57.889880 containerd[1713]: time="2024-09-04T17:19:57.889755782Z" level=info msg="RemovePodSandbox for \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\"" Sep 4 17:19:57.889880 containerd[1713]: time="2024-09-04T17:19:57.889782822Z" level=info msg="Forcibly stopping sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\"" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.924 [WARNING][5143] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"b2ba46b2-596d-4436-98b2-e74bc079ecc8", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"b6a6d06baee5bcd375efb147343c829a4fa4b2359619bb1d251c6099b1052202", Pod:"coredns-7db6d8ff4d-h9qfk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69d45513e1f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.925 [INFO][5143] k8s.go 608: Cleaning up netns ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.925 [INFO][5143] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" iface="eth0" netns="" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.925 [INFO][5143] k8s.go 615: Releasing IP address(es) ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.925 [INFO][5143] utils.go 188: Calico CNI releasing IP address ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.950 [INFO][5150] ipam_plugin.go 417: Releasing address using handleID ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.950 [INFO][5150] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.950 [INFO][5150] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.962 [WARNING][5150] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.962 [INFO][5150] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" HandleID="k8s-pod-network.ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--h9qfk-eth0" Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.963 [INFO][5150] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:57.966413 containerd[1713]: 2024-09-04 17:19:57.964 [INFO][5143] k8s.go 621: Teardown processing complete. ContainerID="ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5" Sep 4 17:19:57.966413 containerd[1713]: time="2024-09-04T17:19:57.966383676Z" level=info msg="TearDown network for sandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\" successfully" Sep 4 17:19:57.976815 containerd[1713]: time="2024-09-04T17:19:57.976770722Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:19:57.976977 containerd[1713]: time="2024-09-04T17:19:57.976848322Z" level=info msg="RemovePodSandbox \"ec1bd630761e6ec5b18d5604aee94509416ac423c77037f0c510ab77cc3236a5\" returns successfully" Sep 4 17:19:57.977658 containerd[1713]: time="2024-09-04T17:19:57.977384124Z" level=info msg="StopPodSandbox for \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\"" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.011 [WARNING][5168] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0", GenerateName:"calico-kube-controllers-5774f58464-", Namespace:"calico-system", SelfLink:"", UID:"48717116-5ff2-47b0-bbf9-6d04482c261b", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5774f58464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb", Pod:"calico-kube-controllers-5774f58464-ztmr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3171b181fcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.012 [INFO][5168] k8s.go 608: Cleaning up netns ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.012 [INFO][5168] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" iface="eth0" netns="" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.012 [INFO][5168] k8s.go 615: Releasing IP address(es) ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.012 [INFO][5168] utils.go 188: Calico CNI releasing IP address ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.035 [INFO][5174] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.035 [INFO][5174] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.035 [INFO][5174] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.043 [WARNING][5174] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.043 [INFO][5174] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.044 [INFO][5174] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:58.047858 containerd[1713]: 2024-09-04 17:19:58.046 [INFO][5168] k8s.go 621: Teardown processing complete. ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.048774 containerd[1713]: time="2024-09-04T17:19:58.047898512Z" level=info msg="TearDown network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\" successfully" Sep 4 17:19:58.048774 containerd[1713]: time="2024-09-04T17:19:58.047923912Z" level=info msg="StopPodSandbox for \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\" returns successfully" Sep 4 17:19:58.048774 containerd[1713]: time="2024-09-04T17:19:58.048527715Z" level=info msg="RemovePodSandbox for \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\"" Sep 4 17:19:58.048774 containerd[1713]: time="2024-09-04T17:19:58.048554035Z" level=info msg="Forcibly stopping sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\"" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.094 [WARNING][5192] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0", GenerateName:"calico-kube-controllers-5774f58464-", Namespace:"calico-system", SelfLink:"", UID:"48717116-5ff2-47b0-bbf9-6d04482c261b", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5774f58464", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"942805dda2660031f29b31b20b939df08537a4d26163e7956deeb785504c84fb", Pod:"calico-kube-controllers-5774f58464-ztmr4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.105.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3171b181fcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.095 [INFO][5192] k8s.go 608: Cleaning up netns ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.095 [INFO][5192] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" iface="eth0" netns="" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.095 [INFO][5192] k8s.go 615: Releasing IP address(es) ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.095 [INFO][5192] utils.go 188: Calico CNI releasing IP address ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.114 [INFO][5199] ipam_plugin.go 417: Releasing address using handleID ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.114 [INFO][5199] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.114 [INFO][5199] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.122 [WARNING][5199] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.122 [INFO][5199] ipam_plugin.go 445: Releasing address using workloadID ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" HandleID="k8s-pod-network.fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--kube--controllers--5774f58464--ztmr4-eth0" Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.123 [INFO][5199] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:58.127575 containerd[1713]: 2024-09-04 17:19:58.125 [INFO][5192] k8s.go 621: Teardown processing complete. ContainerID="fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3" Sep 4 17:19:58.127973 containerd[1713]: time="2024-09-04T17:19:58.127612580Z" level=info msg="TearDown network for sandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\" successfully" Sep 4 17:19:58.144136 containerd[1713]: time="2024-09-04T17:19:58.143976452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:19:58.144136 containerd[1713]: time="2024-09-04T17:19:58.144067412Z" level=info msg="RemovePodSandbox \"fb5eeef5e0d19c1d8caadbd475359f851054ca8ecf891f695a97f3ef744febf3\" returns successfully" Sep 4 17:19:58.154037 containerd[1713]: time="2024-09-04T17:19:58.153734854Z" level=info msg="StopPodSandbox for \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\"" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.190 [WARNING][5217] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"671faa99-9531-400b-8b3d-16fab71771a9", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de", Pod:"coredns-7db6d8ff4d-hmfdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali013cd539d44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.190 [INFO][5217] k8s.go 608: Cleaning up netns ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.190 [INFO][5217] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" iface="eth0" netns="" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.190 [INFO][5217] k8s.go 615: Releasing IP address(es) ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.190 [INFO][5217] utils.go 188: Calico CNI releasing IP address ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.209 [INFO][5223] ipam_plugin.go 417: Releasing address using handleID ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.209 [INFO][5223] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.209 [INFO][5223] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.217 [WARNING][5223] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.217 [INFO][5223] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.219 [INFO][5223] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:58.222122 containerd[1713]: 2024-09-04 17:19:58.220 [INFO][5217] k8s.go 621: Teardown processing complete. ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.223051 containerd[1713]: time="2024-09-04T17:19:58.222583275Z" level=info msg="TearDown network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\" successfully" Sep 4 17:19:58.223051 containerd[1713]: time="2024-09-04T17:19:58.222626195Z" level=info msg="StopPodSandbox for \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\" returns successfully" Sep 4 17:19:58.223267 containerd[1713]: time="2024-09-04T17:19:58.223193198Z" level=info msg="RemovePodSandbox for \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\"" Sep 4 17:19:58.223482 containerd[1713]: time="2024-09-04T17:19:58.223378318Z" level=info msg="Forcibly stopping sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\"" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.264 [WARNING][5241] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"671faa99-9531-400b-8b3d-16fab71771a9", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 19, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"45d8ebc3dc49cb92a4cedc12414c457bad5e02323ce6da4aa645164a3814d1de", Pod:"coredns-7db6d8ff4d-hmfdp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.105.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali013cd539d44", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.264 [INFO][5241] k8s.go 608: Cleaning up netns ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.264 [INFO][5241] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" iface="eth0" netns="" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.264 [INFO][5241] k8s.go 615: Releasing IP address(es) ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.264 [INFO][5241] utils.go 188: Calico CNI releasing IP address ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.282 [INFO][5247] ipam_plugin.go 417: Releasing address using handleID ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.282 [INFO][5247] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.282 [INFO][5247] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.290 [WARNING][5247] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.290 [INFO][5247] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" HandleID="k8s-pod-network.6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Workload="ci--3975.2.1--a--bdc284204f-k8s-coredns--7db6d8ff4d--hmfdp-eth0" Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.291 [INFO][5247] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:19:58.294744 containerd[1713]: 2024-09-04 17:19:58.293 [INFO][5241] k8s.go 621: Teardown processing complete. ContainerID="6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2" Sep 4 17:19:58.295286 containerd[1713]: time="2024-09-04T17:19:58.294769310Z" level=info msg="TearDown network for sandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\" successfully" Sep 4 17:19:58.304908 containerd[1713]: time="2024-09-04T17:19:58.304843514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:19:58.304997 containerd[1713]: time="2024-09-04T17:19:58.304951795Z" level=info msg="RemovePodSandbox \"6610e52c60a2ba9dc1274b5ed692f09ccdada1098ef47d852e5057e0e5549df2\" returns successfully" Sep 4 17:20:06.716753 update_engine[1679]: I0904 17:20:06.716336 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:20:06.716753 update_engine[1679]: I0904 17:20:06.716519 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:20:06.716753 update_engine[1679]: I0904 17:20:06.716730 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 4 17:20:06.725181 update_engine[1679]: E0904 17:20:06.725152 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:20:06.725250 update_engine[1679]: I0904 17:20:06.725215 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 4 17:20:07.089340 kubelet[3185]: I0904 17:20:07.089158 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v67mp" podStartSLOduration=41.72552975 podStartE2EDuration="46.089140171s" podCreationTimestamp="2024-09-04 17:19:21 +0000 UTC" firstStartedPulling="2024-09-04 17:19:49.016029668 +0000 UTC m=+51.537223187" lastFinishedPulling="2024-09-04 17:19:53.379640049 +0000 UTC m=+55.900833608" observedRunningTime="2024-09-04 17:19:53.786439777 +0000 UTC m=+56.307633336" watchObservedRunningTime="2024-09-04 17:20:07.089140171 +0000 UTC m=+69.610333770" Sep 4 17:20:10.461726 kubelet[3185]: I0904 17:20:10.461471 3185 topology_manager.go:215] "Topology Admit Handler" podUID="ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a" podNamespace="calico-apiserver" podName="calico-apiserver-f8cfb55c6-5gtw5" Sep 4 17:20:10.473255 kubelet[3185]: W0904 17:20:10.472626 3185 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975.2.1-a-bdc284204f" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-bdc284204f' and this object Sep 4 17:20:10.473255 kubelet[3185]: W0904 17:20:10.472992 3185 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3975.2.1-a-bdc284204f" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-bdc284204f' and this object Sep 4 17:20:10.473683 systemd[1]: Created slice kubepods-besteffort-podca4f80ce_26d4_4c18_9ba9_00dc2ae7e02a.slice - libcontainer container kubepods-besteffort-podca4f80ce_26d4_4c18_9ba9_00dc2ae7e02a.slice. Sep 4 17:20:10.476908 kubelet[3185]: E0904 17:20:10.476868 3185 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3975.2.1-a-bdc284204f" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-bdc284204f' and this object Sep 4 17:20:10.477301 kubelet[3185]: E0904 17:20:10.477217 3185 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-3975.2.1-a-bdc284204f" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-3975.2.1-a-bdc284204f' and this object Sep 4 17:20:10.496502 kubelet[3185]: I0904 17:20:10.495109 3185 topology_manager.go:215] "Topology Admit Handler" podUID="67eb9e7e-be36-4fec-9b44-3a0a01b33cd8" podNamespace="calico-apiserver" podName="calico-apiserver-f8cfb55c6-qkcrv" Sep 4 17:20:10.505203 systemd[1]: Created slice kubepods-besteffort-pod67eb9e7e_be36_4fec_9b44_3a0a01b33cd8.slice - libcontainer container kubepods-besteffort-pod67eb9e7e_be36_4fec_9b44_3a0a01b33cd8.slice. 
Sep 4 17:20:10.551745 kubelet[3185]: I0904 17:20:10.551630 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lhc\" (UniqueName: \"kubernetes.io/projected/ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a-kube-api-access-k4lhc\") pod \"calico-apiserver-f8cfb55c6-5gtw5\" (UID: \"ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a\") " pod="calico-apiserver/calico-apiserver-f8cfb55c6-5gtw5" Sep 4 17:20:10.551876 kubelet[3185]: I0904 17:20:10.551793 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a-calico-apiserver-certs\") pod \"calico-apiserver-f8cfb55c6-5gtw5\" (UID: \"ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a\") " pod="calico-apiserver/calico-apiserver-f8cfb55c6-5gtw5" Sep 4 17:20:10.551876 kubelet[3185]: I0904 17:20:10.551863 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67eb9e7e-be36-4fec-9b44-3a0a01b33cd8-calico-apiserver-certs\") pod \"calico-apiserver-f8cfb55c6-qkcrv\" (UID: \"67eb9e7e-be36-4fec-9b44-3a0a01b33cd8\") " pod="calico-apiserver/calico-apiserver-f8cfb55c6-qkcrv" Sep 4 17:20:10.551936 kubelet[3185]: I0904 17:20:10.551886 3185 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58mvb\" (UniqueName: \"kubernetes.io/projected/67eb9e7e-be36-4fec-9b44-3a0a01b33cd8-kube-api-access-58mvb\") pod \"calico-apiserver-f8cfb55c6-qkcrv\" (UID: \"67eb9e7e-be36-4fec-9b44-3a0a01b33cd8\") " pod="calico-apiserver/calico-apiserver-f8cfb55c6-qkcrv" Sep 4 17:20:11.654052 kubelet[3185]: E0904 17:20:11.653962 3185 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:20:11.654052 kubelet[3185]: E0904 17:20:11.654060 3185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/67eb9e7e-be36-4fec-9b44-3a0a01b33cd8-calico-apiserver-certs podName:67eb9e7e-be36-4fec-9b44-3a0a01b33cd8 nodeName:}" failed. No retries permitted until 2024-09-04 17:20:12.1540399 +0000 UTC m=+74.675233459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/67eb9e7e-be36-4fec-9b44-3a0a01b33cd8-calico-apiserver-certs") pod "calico-apiserver-f8cfb55c6-qkcrv" (UID: "67eb9e7e-be36-4fec-9b44-3a0a01b33cd8") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:20:11.654675 kubelet[3185]: E0904 17:20:11.653964 3185 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:20:11.654675 kubelet[3185]: E0904 17:20:11.654365 3185 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a-calico-apiserver-certs podName:ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a nodeName:}" failed. No retries permitted until 2024-09-04 17:20:12.154350742 +0000 UTC m=+74.675544301 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a-calico-apiserver-certs") pod "calico-apiserver-f8cfb55c6-5gtw5" (UID: "ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:20:12.280684 containerd[1713]: time="2024-09-04T17:20:12.280603040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8cfb55c6-5gtw5,Uid:ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:20:12.309218 containerd[1713]: time="2024-09-04T17:20:12.308895576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8cfb55c6-qkcrv,Uid:67eb9e7e-be36-4fec-9b44-3a0a01b33cd8,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:20:12.452730 systemd-networkd[1599]: cali6e548284ee4: Link UP Sep 4 17:20:12.455036 systemd-networkd[1599]: cali6e548284ee4: Gained carrier Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.358 [INFO][5323] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0 calico-apiserver-f8cfb55c6- calico-apiserver ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a 881 0 2024-09-04 17:20:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8cfb55c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-bdc284204f calico-apiserver-f8cfb55c6-5gtw5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6e548284ee4 [] []}} ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.358 [INFO][5323] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.394 [INFO][5344] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" HandleID="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.409 [INFO][5344] ipam_plugin.go 270: Auto assigning IP ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" HandleID="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000115e50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-bdc284204f", "pod":"calico-apiserver-f8cfb55c6-5gtw5", "timestamp":"2024-09-04 17:20:12.394125267 +0000 UTC"}, Hostname:"ci-3975.2.1-a-bdc284204f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.409 [INFO][5344] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.410 [INFO][5344] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.410 [INFO][5344] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-bdc284204f' Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.413 [INFO][5344] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.417 [INFO][5344] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.423 [INFO][5344] ipam.go 489: Trying affinity for 192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.425 [INFO][5344] ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.428 [INFO][5344] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.429 [INFO][5344] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.436 [INFO][5344] ipam.go 1685: Creating new handle: k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8 Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.439 [INFO][5344] ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.444 [INFO][5344] ipam.go 1216: Successfully claimed IPs: [192.168.105.133/26] block=192.168.105.128/26 handle="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.444 [INFO][5344] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.133/26] handle="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.444 [INFO][5344] ipam_plugin.go 379: Released host-wide IPAM lock. 
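
The kubelet entries above show MountVolume.SetUp for the calico-apiserver-certs secret failing because the secret informer cache had not finished syncing; rather than failing the pods, the kubelet records the error and schedules a retry ("No retries permitted until ... durationBeforeRetry 500ms"). A minimal Go sketch of that retry-with-backoff pattern follows. It is illustrative only: the function name, the attempt that finally succeeds, and the 2-minute cap are assumptions rather than kubelet code, and only the 500ms starting delay is taken from the log.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errCacheNotSynced stands in for the "failed to sync secret cache" error above.
    var errCacheNotSynced = errors.New("failed to sync secret cache: timed out waiting for the condition")

    // mountSecretVolume is a hypothetical stand-in for MountVolume.SetUp; here it
    // simply succeeds once the (simulated) informer cache has had time to sync.
    func mountSecretVolume(attempt int) error {
        if attempt < 3 {
            return errCacheNotSynced
        }
        return nil
    }

    func main() {
        backoff := 500 * time.Millisecond // matches durationBeforeRetry in the log
        const maxBackoff = 2 * time.Minute // assumed cap, for illustration

        for attempt := 1; ; attempt++ {
            err := mountSecretVolume(attempt)
            if err == nil {
                fmt.Println("volume mounted on attempt", attempt)
                return
            }
            fmt.Printf("attempt %d failed: %v; no retries permitted until %s\n",
                attempt, err, time.Now().Add(backoff).Format(time.RFC3339))
            time.Sleep(backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }

In the log the second attempt, 500ms later, succeeds, which is why both sandboxes are created at 17:20:12.
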
Sep 4 17:20:12.466384 containerd[1713]: 2024-09-04 17:20:12.444 [INFO][5344] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.105.133/26] IPv6=[] ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" HandleID="k8s-pod-network.d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" Sep 4 17:20:12.467487 containerd[1713]: 2024-09-04 17:20:12.447 [INFO][5323] k8s.go 386: Populated endpoint ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0", GenerateName:"calico-apiserver-f8cfb55c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8cfb55c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"", Pod:"calico-apiserver-f8cfb55c6-5gtw5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e548284ee4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:12.467487 containerd[1713]: 2024-09-04 17:20:12.447 [INFO][5323] k8s.go 387: Calico CNI using IPs: [192.168.105.133/32] ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" Sep 4 17:20:12.467487 containerd[1713]: 2024-09-04 17:20:12.447 [INFO][5323] dataplane_linux.go 68: Setting the host side veth name to cali6e548284ee4 ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" Sep 4 17:20:12.467487 containerd[1713]: 2024-09-04 17:20:12.450 [INFO][5323] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" Sep 4 17:20:12.467487 containerd[1713]: 2024-09-04 17:20:12.450 [INFO][5323] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0", GenerateName:"calico-apiserver-f8cfb55c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8cfb55c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8", Pod:"calico-apiserver-f8cfb55c6-5gtw5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6e548284ee4", MAC:"46:b7:39:2b:77:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:12.467487 containerd[1713]: 2024-09-04 17:20:12.459 [INFO][5323] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-5gtw5" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--5gtw5-eth0" Sep 4 17:20:12.509792 systemd-networkd[1599]: cali99357c13dc9: Link UP Sep 4 17:20:12.509983 systemd-networkd[1599]: cali99357c13dc9: Gained carrier Sep 4 17:20:12.521101 containerd[1713]: time="2024-09-04T17:20:12.520585796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:12.521101 containerd[1713]: time="2024-09-04T17:20:12.520645437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:12.521101 containerd[1713]: time="2024-09-04T17:20:12.520673037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:12.521101 containerd[1713]: time="2024-09-04T17:20:12.520686877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.384 [INFO][5334] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0 calico-apiserver-f8cfb55c6- calico-apiserver 67eb9e7e-be36-4fec-9b44-3a0a01b33cd8 885 0 2024-09-04 17:20:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8cfb55c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.1-a-bdc284204f calico-apiserver-f8cfb55c6-qkcrv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali99357c13dc9 [] []}} ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.384 [INFO][5334] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.427 [INFO][5352] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" HandleID="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.440 [INFO][5352] ipam_plugin.go 270: Auto assigning IP ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" HandleID="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ebda0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.1-a-bdc284204f", "pod":"calico-apiserver-f8cfb55c6-qkcrv", "timestamp":"2024-09-04 17:20:12.427050185 +0000 UTC"}, Hostname:"ci-3975.2.1-a-bdc284204f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.440 [INFO][5352] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.445 [INFO][5352] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.445 [INFO][5352] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.1-a-bdc284204f' Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.448 [INFO][5352] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.457 [INFO][5352] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.476 [INFO][5352] ipam.go 489: Trying affinity for 192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.480 [INFO][5352] ipam.go 155: Attempting to load block cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.483 [INFO][5352] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.105.128/26 host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.483 [INFO][5352] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.105.128/26 handle="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.484 [INFO][5352] ipam.go 1685: Creating new handle: k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10 Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.488 [INFO][5352] ipam.go 1203: Writing block in order to claim IPs block=192.168.105.128/26 handle="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.498 [INFO][5352] ipam.go 1216: Successfully claimed IPs: [192.168.105.134/26] block=192.168.105.128/26 handle="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.498 [INFO][5352] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.105.134/26] handle="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" host="ci-3975.2.1-a-bdc284204f" Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.498 [INFO][5352] ipam_plugin.go 379: Released host-wide IPAM lock. 
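
The ipam.go / ipam_plugin.go entries above trace Calico's address assignment for both apiserver pods: take the host-wide IPAM lock, look up this node's block affinities, confirm the affinity for 192.168.105.128/26, load the block, claim one free address (.133, then .134), write the block back, and release the lock. The sketch below replays that claim sequence against an in-memory block; the types, the mutex, and the "addresses .128-.132 already taken" setup are assumptions for illustration, not libcalico-go code.

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // ipamBlock is a toy stand-in for a Calico /26 allocation block; the real
    // blocks live in the datastore and are guarded by the lock seen in the log.
    type ipamBlock struct {
        cidr      *net.IPNet
        allocated map[string]bool
    }

    var hostWideLock sync.Mutex // plays the role of the "host-wide IPAM lock"

    // assignFromBlock claims the first free address in the block, mirroring the
    // "Trying affinity" -> "load block" -> "assign" -> "write block" steps above.
    func assignFromBlock(b *ipamBlock, handle string) (net.IP, error) {
        hostWideLock.Lock()
        defer hostWideLock.Unlock() // "Released host-wide IPAM lock"

        for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
            if !b.allocated[ip.String()] {
                b.allocated[ip.String()] = true // a datastore write in real Calico
                return ip, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted for handle %s", b.cidr, handle)
    }

    func nextIP(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.105.128/26")
        block := &ipamBlock{cidr: cidr, allocated: map[string]bool{}}
        for i := 128; i <= 132; i++ { // pretend earlier pods already hold .128-.132
            block.allocated[fmt.Sprintf("192.168.105.%d", i)] = true
        }
        ip, err := assignFromBlock(block, "demo-handle")
        fmt.Println(ip, err) // 192.168.105.133 <nil>
    }

The timestamps above are consistent with that serialization: the second pod's CNI add only acquires the lock (17:20:12.445) right after the first one releases it (17:20:12.444).
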
Sep 4 17:20:12.540420 containerd[1713]: 2024-09-04 17:20:12.498 [INFO][5352] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.105.134/26] IPv6=[] ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" HandleID="k8s-pod-network.4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Workload="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" Sep 4 17:20:12.542027 containerd[1713]: 2024-09-04 17:20:12.502 [INFO][5334] k8s.go 386: Populated endpoint ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0", GenerateName:"calico-apiserver-f8cfb55c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"67eb9e7e-be36-4fec-9b44-3a0a01b33cd8", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8cfb55c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"", Pod:"calico-apiserver-f8cfb55c6-qkcrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99357c13dc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:12.542027 containerd[1713]: 2024-09-04 17:20:12.502 [INFO][5334] k8s.go 387: Calico CNI using IPs: [192.168.105.134/32] ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" Sep 4 17:20:12.542027 containerd[1713]: 2024-09-04 17:20:12.502 [INFO][5334] dataplane_linux.go 68: Setting the host side veth name to cali99357c13dc9 ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" Sep 4 17:20:12.542027 containerd[1713]: 2024-09-04 17:20:12.509 [INFO][5334] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" Sep 4 17:20:12.542027 containerd[1713]: 2024-09-04 17:20:12.511 [INFO][5334] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0", GenerateName:"calico-apiserver-f8cfb55c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"67eb9e7e-be36-4fec-9b44-3a0a01b33cd8", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 20, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8cfb55c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.1-a-bdc284204f", ContainerID:"4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10", Pod:"calico-apiserver-f8cfb55c6-qkcrv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.105.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali99357c13dc9", MAC:"82:c6:38:74:ef:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:20:12.542027 containerd[1713]: 2024-09-04 17:20:12.531 [INFO][5334] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10" Namespace="calico-apiserver" Pod="calico-apiserver-f8cfb55c6-qkcrv" WorkloadEndpoint="ci--3975.2.1--a--bdc284204f-k8s-calico--apiserver--f8cfb55c6--qkcrv-eth0" Sep 4 17:20:12.554332 systemd[1]: Started cri-containerd-d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8.scope - libcontainer container d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8. Sep 4 17:20:12.577157 containerd[1713]: time="2024-09-04T17:20:12.575845863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:20:12.577157 containerd[1713]: time="2024-09-04T17:20:12.575911783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:12.577157 containerd[1713]: time="2024-09-04T17:20:12.575930663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:20:12.577157 containerd[1713]: time="2024-09-04T17:20:12.575944343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:20:12.608529 systemd[1]: Started cri-containerd-4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10.scope - libcontainer container 4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10. 
Sep 4 17:20:12.631327 containerd[1713]: time="2024-09-04T17:20:12.631218089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8cfb55c6-5gtw5,Uid:ca4f80ce-26d4-4c18-9ba9-00dc2ae7e02a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8\"" Sep 4 17:20:12.633273 containerd[1713]: time="2024-09-04T17:20:12.633231819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:20:12.656264 containerd[1713]: time="2024-09-04T17:20:12.656113449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8cfb55c6-qkcrv,Uid:67eb9e7e-be36-4fec-9b44-3a0a01b33cd8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10\"" Sep 4 17:20:13.618208 systemd-networkd[1599]: cali6e548284ee4: Gained IPv6LL Sep 4 17:20:14.130198 systemd-networkd[1599]: cali99357c13dc9: Gained IPv6LL Sep 4 17:20:15.163834 containerd[1713]: time="2024-09-04T17:20:15.163780425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:15.166454 containerd[1713]: time="2024-09-04T17:20:15.166411956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Sep 4 17:20:15.171286 containerd[1713]: time="2024-09-04T17:20:15.171238376Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:15.182778 containerd[1713]: time="2024-09-04T17:20:15.182740704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:15.183522 containerd[1713]: time="2024-09-04T17:20:15.183493747Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.549644285s" Sep 4 17:20:15.183577 containerd[1713]: time="2024-09-04T17:20:15.183525308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Sep 4 17:20:15.185101 containerd[1713]: time="2024-09-04T17:20:15.185052194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:20:15.186372 containerd[1713]: time="2024-09-04T17:20:15.186322279Z" level=info msg="CreateContainer within sandbox \"d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:20:15.230717 containerd[1713]: time="2024-09-04T17:20:15.230622064Z" level=info msg="CreateContainer within sandbox \"d8cdb4ad6107aba37a832f83b9864fe3bb72d75926e3434f6b0564964cbea8f8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9693b841b2e71f63cf152a41007fb855f2f76e71bbfecc4a599223ee6a75e8af\"" Sep 4 17:20:15.231236 containerd[1713]: time="2024-09-04T17:20:15.231208267Z" level=info msg="StartContainer for 
\"9693b841b2e71f63cf152a41007fb855f2f76e71bbfecc4a599223ee6a75e8af\"" Sep 4 17:20:15.262378 systemd[1]: Started cri-containerd-9693b841b2e71f63cf152a41007fb855f2f76e71bbfecc4a599223ee6a75e8af.scope - libcontainer container 9693b841b2e71f63cf152a41007fb855f2f76e71bbfecc4a599223ee6a75e8af. Sep 4 17:20:15.296297 containerd[1713]: time="2024-09-04T17:20:15.296254058Z" level=info msg="StartContainer for \"9693b841b2e71f63cf152a41007fb855f2f76e71bbfecc4a599223ee6a75e8af\" returns successfully" Sep 4 17:20:15.512948 containerd[1713]: time="2024-09-04T17:20:15.511204196Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:20:15.514536 containerd[1713]: time="2024-09-04T17:20:15.514509049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Sep 4 17:20:15.517682 containerd[1713]: time="2024-09-04T17:20:15.517655982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 332.555868ms" Sep 4 17:20:15.517801 containerd[1713]: time="2024-09-04T17:20:15.517785383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Sep 4 17:20:15.521210 containerd[1713]: time="2024-09-04T17:20:15.521025557Z" level=info msg="CreateContainer within sandbox \"4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:20:15.565474 containerd[1713]: time="2024-09-04T17:20:15.565436142Z" level=info msg="CreateContainer within sandbox \"4b0c722ef7100a6f3131a158661f813e1e6bb44b1663fb9977c8222f029cad10\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"242f5082dd4bd31befea5f5f755558f07dce6cd0a4c16b11249cc186652ddc12\"" Sep 4 17:20:15.566361 containerd[1713]: time="2024-09-04T17:20:15.566340746Z" level=info msg="StartContainer for \"242f5082dd4bd31befea5f5f755558f07dce6cd0a4c16b11249cc186652ddc12\"" Sep 4 17:20:15.597486 systemd[1]: Started cri-containerd-242f5082dd4bd31befea5f5f755558f07dce6cd0a4c16b11249cc186652ddc12.scope - libcontainer container 242f5082dd4bd31befea5f5f755558f07dce6cd0a4c16b11249cc186652ddc12. 
Sep 4 17:20:15.640507 containerd[1713]: time="2024-09-04T17:20:15.640283414Z" level=info msg="StartContainer for \"242f5082dd4bd31befea5f5f755558f07dce6cd0a4c16b11249cc186652ddc12\" returns successfully" Sep 4 17:20:15.848853 kubelet[3185]: I0904 17:20:15.848711 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8cfb55c6-qkcrv" podStartSLOduration=2.987537955 podStartE2EDuration="5.848696725s" podCreationTimestamp="2024-09-04 17:20:10 +0000 UTC" firstStartedPulling="2024-09-04 17:20:12.657437976 +0000 UTC m=+75.178631535" lastFinishedPulling="2024-09-04 17:20:15.518596746 +0000 UTC m=+78.039790305" observedRunningTime="2024-09-04 17:20:15.848341643 +0000 UTC m=+78.369535202" watchObservedRunningTime="2024-09-04 17:20:15.848696725 +0000 UTC m=+78.369890324" Sep 4 17:20:16.716447 update_engine[1679]: I0904 17:20:16.716392 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:20:16.716779 update_engine[1679]: I0904 17:20:16.716626 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:20:16.716886 update_engine[1679]: I0904 17:20:16.716852 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:20:16.729644 update_engine[1679]: E0904 17:20:16.729576 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:20:16.729644 update_engine[1679]: I0904 17:20:16.729625 1679 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Sep 4 17:20:16.864360 kubelet[3185]: I0904 17:20:16.864186 3185 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8cfb55c6-5gtw5" podStartSLOduration=4.31264179 podStartE2EDuration="6.864168644s" podCreationTimestamp="2024-09-04 17:20:10 +0000 UTC" firstStartedPulling="2024-09-04 17:20:12.632826297 +0000 UTC m=+75.154019816" lastFinishedPulling="2024-09-04 17:20:15.184353111 +0000 UTC m=+77.705546670" observedRunningTime="2024-09-04 17:20:15.864796872 +0000 UTC m=+78.385990431" watchObservedRunningTime="2024-09-04 17:20:16.864168644 +0000 UTC m=+79.385362203" Sep 4 17:20:26.725414 update_engine[1679]: I0904 17:20:26.725114 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:20:26.725414 update_engine[1679]: I0904 17:20:26.725288 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:20:26.726403 update_engine[1679]: I0904 17:20:26.725483 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:20:26.790242 update_engine[1679]: E0904 17:20:26.789600 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789659 1679 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789664 1679 omaha_request_action.cc:617] Omaha request response: Sep 4 17:20:26.790242 update_engine[1679]: E0904 17:20:26.789743 1679 omaha_request_action.cc:636] Omaha request network transfer failed. Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789758 1679 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
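
The pod_startup_latency_tracker entry above reports two numbers for calico-apiserver-f8cfb55c6-qkcrv: podStartE2EDuration (observed running time minus the pod creation timestamp, about 5.85s) and podStartSLOduration (about 2.99s). From the timestamps in the same entry, the difference between the two is exactly the image pull window (lastFinishedPulling minus firstStartedPulling, about 2.86s), which matches the usual reading that the SLO duration excludes image pulling. A quick check of that arithmetic in Go, with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse(time.RFC3339Nano, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps from the kubelet entry for calico-apiserver-f8cfb55c6-qkcrv.
        created := mustParse("2024-09-04T17:20:10Z")
        firstPull := mustParse("2024-09-04T17:20:12.657437976Z")
        lastPull := mustParse("2024-09-04T17:20:15.518596746Z")
        running := mustParse("2024-09-04T17:20:15.848696725Z")

        e2e := running.Sub(created)
        pull := lastPull.Sub(firstPull)
        fmt.Println("podStartE2EDuration:", e2e) // 5.848696725s
        fmt.Println("image pull window:", pull)  // 2.86115877s
        fmt.Println("E2E minus pull:", e2e-pull) // 2.987537955s, i.e. podStartSLOduration
    }

The second apiserver pod shows the same relationship with a much shorter pull window (332ms), since its image layers were already present after the first pull.
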
Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789760 1679 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789764 1679 update_attempter.cc:306] Processing Done. Sep 4 17:20:26.790242 update_engine[1679]: E0904 17:20:26.789777 1679 update_attempter.cc:619] Update failed. Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789779 1679 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789783 1679 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789786 1679 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789849 1679 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789867 1679 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 4 17:20:26.790242 update_engine[1679]: I0904 17:20:26.789870 1679 omaha_request_action.cc:272] Request: Sep 4 17:20:26.790242 update_engine[1679]: Sep 4 17:20:26.790242 update_engine[1679]: Sep 4 17:20:26.790242 update_engine[1679]: Sep 4 17:20:26.790957 update_engine[1679]: Sep 4 17:20:26.790957 update_engine[1679]: Sep 4 17:20:26.790957 update_engine[1679]: Sep 4 17:20:26.790957 update_engine[1679]: I0904 17:20:26.789873 1679 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 4 17:20:26.790957 update_engine[1679]: I0904 17:20:26.789986 1679 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 4 17:20:26.790957 update_engine[1679]: I0904 17:20:26.790197 1679 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 4 17:20:26.791240 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Sep 4 17:20:26.803065 update_engine[1679]: E0904 17:20:26.802824 1679 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 4 17:20:26.803065 update_engine[1679]: I0904 17:20:26.802881 1679 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Sep 4 17:20:26.803065 update_engine[1679]: I0904 17:20:26.802884 1679 omaha_request_action.cc:617] Omaha request response: Sep 4 17:20:26.803065 update_engine[1679]: I0904 17:20:26.802888 1679 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 17:20:26.803065 update_engine[1679]: I0904 17:20:26.802891 1679 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Sep 4 17:20:26.803065 update_engine[1679]: I0904 17:20:26.802893 1679 update_attempter.cc:306] Processing Done. Sep 4 17:20:26.803065 update_engine[1679]: I0904 17:20:26.802898 1679 update_attempter.cc:310] Error event sent. Sep 4 17:20:26.803065 update_engine[1679]: I0904 17:20:26.802906 1679 update_check_scheduler.cc:74] Next update check in 48m8s Sep 4 17:20:26.804223 locksmithd[1745]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Sep 4 17:21:02.200196 systemd[1]: run-containerd-runc-k8s.io-3dc104d77bb635e8577c005c4ff6771739d4992319bf9c2413d5872015143f48-runc.zUjvFl.mount: Deactivated successfully. 
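
The update_engine entries show the periodic Omaha update check failing with "Could not resolve host: disabled": the update server on this image is set to the literal string "disabled" (typically SERVER=disabled in /etc/flatcar/update.conf, a common way to switch off automatic updates), so curl cannot resolve it; the blank update_engine lines above are where the Omaha request XML would normally be printed. The failure is recorded as kActionCodeOmahaErrorInHTTPResponse, locksmithd reports the error event, and a fresh check is scheduled ("Next update check in 48m8s"). That interval is a fuzzed base period; the toy sketch below shows the general shape, with the 45-minute base and 10-minute fuzz chosen for illustration rather than taken from update_engine.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextCheckDelay returns a base interval plus a random +/- fuzz, the general
    // shape behind lines like "Next update check in 48m8s". The base and fuzz
    // values passed in main are assumptions, not update_engine's real constants.
    func nextCheckDelay(base, fuzz time.Duration) time.Duration {
        offset := time.Duration(rand.Int63n(int64(fuzz))) - fuzz/2
        return base + offset
    }

    func main() {
        fmt.Println("next update check in",
            nextCheckDelay(45*time.Minute, 10*time.Minute).Round(time.Second))
    }
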
Sep 4 17:21:25.543096 systemd[1]: Started sshd@7-10.200.20.21:22-10.200.16.10:49590.service - OpenSSH per-connection server daemon (10.200.16.10:49590). Sep 4 17:21:25.996833 sshd[5742]: Accepted publickey for core from 10.200.16.10 port 49590 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:25.998848 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:26.003382 systemd-logind[1676]: New session 10 of user core. Sep 4 17:21:26.009238 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:21:26.389364 sshd[5742]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:26.392517 systemd-logind[1676]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:21:26.393500 systemd[1]: sshd@7-10.200.20.21:22-10.200.16.10:49590.service: Deactivated successfully. Sep 4 17:21:26.396260 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:21:26.397224 systemd-logind[1676]: Removed session 10. Sep 4 17:21:31.473658 systemd[1]: Started sshd@8-10.200.20.21:22-10.200.16.10:46816.service - OpenSSH per-connection server daemon (10.200.16.10:46816). Sep 4 17:21:31.919877 sshd[5756]: Accepted publickey for core from 10.200.16.10 port 46816 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:31.921499 sshd[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:31.925157 systemd-logind[1676]: New session 11 of user core. Sep 4 17:21:31.929204 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:21:32.335014 sshd[5756]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:32.337771 systemd[1]: sshd@8-10.200.20.21:22-10.200.16.10:46816.service: Deactivated successfully. Sep 4 17:21:32.339556 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:21:32.341039 systemd-logind[1676]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:21:32.342183 systemd-logind[1676]: Removed session 11. Sep 4 17:21:37.421933 systemd[1]: Started sshd@9-10.200.20.21:22-10.200.16.10:46826.service - OpenSSH per-connection server daemon (10.200.16.10:46826). Sep 4 17:21:37.899950 sshd[5818]: Accepted publickey for core from 10.200.16.10 port 46826 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:37.901315 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:37.905220 systemd-logind[1676]: New session 12 of user core. Sep 4 17:21:37.912213 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:21:38.322464 sshd[5818]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:38.326209 systemd[1]: sshd@9-10.200.20.21:22-10.200.16.10:46826.service: Deactivated successfully. Sep 4 17:21:38.327929 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:21:38.328622 systemd-logind[1676]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:21:38.329868 systemd-logind[1676]: Removed session 12. Sep 4 17:21:38.412338 systemd[1]: Started sshd@10-10.200.20.21:22-10.200.16.10:46838.service - OpenSSH per-connection server daemon (10.200.16.10:46838). Sep 4 17:21:38.894567 sshd[5832]: Accepted publickey for core from 10.200.16.10 port 46838 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:38.896287 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:38.900627 systemd-logind[1676]: New session 13 of user core. 
Sep 4 17:21:38.906273 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:21:39.360323 sshd[5832]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:39.363937 systemd[1]: sshd@10-10.200.20.21:22-10.200.16.10:46838.service: Deactivated successfully. Sep 4 17:21:39.366546 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:21:39.367844 systemd-logind[1676]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:21:39.369553 systemd-logind[1676]: Removed session 13. Sep 4 17:21:39.450354 systemd[1]: Started sshd@11-10.200.20.21:22-10.200.16.10:37758.service - OpenSSH per-connection server daemon (10.200.16.10:37758). Sep 4 17:21:39.924804 sshd[5843]: Accepted publickey for core from 10.200.16.10 port 37758 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:39.926197 sshd[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:39.929980 systemd-logind[1676]: New session 14 of user core. Sep 4 17:21:39.936207 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:21:40.324535 sshd[5843]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:40.328443 systemd[1]: sshd@11-10.200.20.21:22-10.200.16.10:37758.service: Deactivated successfully. Sep 4 17:21:40.331577 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:21:40.332297 systemd-logind[1676]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:21:40.333571 systemd-logind[1676]: Removed session 14. Sep 4 17:21:45.419317 systemd[1]: Started sshd@12-10.200.20.21:22-10.200.16.10:37760.service - OpenSSH per-connection server daemon (10.200.16.10:37760). Sep 4 17:21:45.893525 sshd[5868]: Accepted publickey for core from 10.200.16.10 port 37760 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:45.895198 sshd[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:45.898752 systemd-logind[1676]: New session 15 of user core. Sep 4 17:21:45.903231 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:21:46.291281 sshd[5868]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:46.294741 systemd[1]: sshd@12-10.200.20.21:22-10.200.16.10:37760.service: Deactivated successfully. Sep 4 17:21:46.296766 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:21:46.297539 systemd-logind[1676]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:21:46.298319 systemd-logind[1676]: Removed session 15. Sep 4 17:21:51.377787 systemd[1]: Started sshd@13-10.200.20.21:22-10.200.16.10:57504.service - OpenSSH per-connection server daemon (10.200.16.10:57504). Sep 4 17:21:51.853472 sshd[5880]: Accepted publickey for core from 10.200.16.10 port 57504 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:51.854581 sshd[5880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:51.858296 systemd-logind[1676]: New session 16 of user core. Sep 4 17:21:51.867234 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:21:52.263966 sshd[5880]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:52.266524 systemd[1]: sshd@13-10.200.20.21:22-10.200.16.10:57504.service: Deactivated successfully. Sep 4 17:21:52.269736 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:21:52.271008 systemd-logind[1676]: Session 16 logged out. Waiting for processes to exit. 
Sep 4 17:21:52.272009 systemd-logind[1676]: Removed session 16. Sep 4 17:21:57.355730 systemd[1]: Started sshd@14-10.200.20.21:22-10.200.16.10:57512.service - OpenSSH per-connection server daemon (10.200.16.10:57512). Sep 4 17:21:57.839719 sshd[5898]: Accepted publickey for core from 10.200.16.10 port 57512 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:57.841036 sshd[5898]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:57.844478 systemd-logind[1676]: New session 17 of user core. Sep 4 17:21:57.851274 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:21:58.259266 sshd[5898]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:58.263107 systemd[1]: sshd@14-10.200.20.21:22-10.200.16.10:57512.service: Deactivated successfully. Sep 4 17:21:58.264970 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:21:58.266749 systemd-logind[1676]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:21:58.267634 systemd-logind[1676]: Removed session 17. Sep 4 17:21:58.339799 systemd[1]: Started sshd@15-10.200.20.21:22-10.200.16.10:57524.service - OpenSSH per-connection server daemon (10.200.16.10:57524). Sep 4 17:21:58.790182 sshd[5934]: Accepted publickey for core from 10.200.16.10 port 57524 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:58.791532 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:58.795842 systemd-logind[1676]: New session 18 of user core. Sep 4 17:21:58.805403 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:21:59.368473 sshd[5934]: pam_unix(sshd:session): session closed for user core Sep 4 17:21:59.371712 systemd-logind[1676]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:21:59.372663 systemd[1]: sshd@15-10.200.20.21:22-10.200.16.10:57524.service: Deactivated successfully. Sep 4 17:21:59.374705 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:21:59.376459 systemd-logind[1676]: Removed session 18. Sep 4 17:21:59.458328 systemd[1]: Started sshd@16-10.200.20.21:22-10.200.16.10:53956.service - OpenSSH per-connection server daemon (10.200.16.10:53956). Sep 4 17:21:59.934831 sshd[5945]: Accepted publickey for core from 10.200.16.10 port 53956 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:21:59.936496 sshd[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:21:59.940275 systemd-logind[1676]: New session 19 of user core. Sep 4 17:21:59.948212 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:22:01.822306 sshd[5945]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:01.826032 systemd[1]: sshd@16-10.200.20.21:22-10.200.16.10:53956.service: Deactivated successfully. Sep 4 17:22:01.826336 systemd-logind[1676]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:22:01.827962 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:22:01.829254 systemd-logind[1676]: Removed session 19. Sep 4 17:22:01.909993 systemd[1]: Started sshd@17-10.200.20.21:22-10.200.16.10:53958.service - OpenSSH per-connection server daemon (10.200.16.10:53958). 
Sep 4 17:22:02.353151 sshd[5965]: Accepted publickey for core from 10.200.16.10 port 53958 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:02.354660 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:02.358907 systemd-logind[1676]: New session 20 of user core. Sep 4 17:22:02.364266 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:22:02.834387 sshd[5965]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:02.836912 systemd[1]: sshd@17-10.200.20.21:22-10.200.16.10:53958.service: Deactivated successfully. Sep 4 17:22:02.838716 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:22:02.839975 systemd-logind[1676]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:22:02.841478 systemd-logind[1676]: Removed session 20. Sep 4 17:22:02.920287 systemd[1]: Started sshd@18-10.200.20.21:22-10.200.16.10:53968.service - OpenSSH per-connection server daemon (10.200.16.10:53968). Sep 4 17:22:03.364092 sshd[5994]: Accepted publickey for core from 10.200.16.10 port 53968 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:03.365386 sshd[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:03.369515 systemd-logind[1676]: New session 21 of user core. Sep 4 17:22:03.374231 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:22:03.743576 sshd[5994]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:03.746667 systemd[1]: sshd@18-10.200.20.21:22-10.200.16.10:53968.service: Deactivated successfully. Sep 4 17:22:03.748971 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:22:03.749572 systemd-logind[1676]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:22:03.750671 systemd-logind[1676]: Removed session 21. Sep 4 17:22:08.831042 systemd[1]: Started sshd@19-10.200.20.21:22-10.200.16.10:36414.service - OpenSSH per-connection server daemon (10.200.16.10:36414). Sep 4 17:22:09.314443 sshd[6037]: Accepted publickey for core from 10.200.16.10 port 36414 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:09.315742 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:09.319328 systemd-logind[1676]: New session 22 of user core. Sep 4 17:22:09.328228 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:22:09.736203 sshd[6037]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:09.739391 systemd[1]: sshd@19-10.200.20.21:22-10.200.16.10:36414.service: Deactivated successfully. Sep 4 17:22:09.740891 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:22:09.741534 systemd-logind[1676]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:22:09.742770 systemd-logind[1676]: Removed session 22. Sep 4 17:22:14.817200 systemd[1]: Started sshd@20-10.200.20.21:22-10.200.16.10:36426.service - OpenSSH per-connection server daemon (10.200.16.10:36426). Sep 4 17:22:15.264486 sshd[6050]: Accepted publickey for core from 10.200.16.10 port 36426 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:15.266209 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:15.269933 systemd-logind[1676]: New session 23 of user core. Sep 4 17:22:15.274239 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 17:22:15.646843 sshd[6050]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:15.650165 systemd[1]: sshd@20-10.200.20.21:22-10.200.16.10:36426.service: Deactivated successfully. Sep 4 17:22:15.652945 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:22:15.654379 systemd-logind[1676]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:22:15.655255 systemd-logind[1676]: Removed session 23. Sep 4 17:22:20.732314 systemd[1]: Started sshd@21-10.200.20.21:22-10.200.16.10:45826.service - OpenSSH per-connection server daemon (10.200.16.10:45826). Sep 4 17:22:21.187468 sshd[6076]: Accepted publickey for core from 10.200.16.10 port 45826 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:21.187918 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:21.192066 systemd-logind[1676]: New session 24 of user core. Sep 4 17:22:21.195232 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:22:21.566142 sshd[6076]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:21.569064 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:22:21.569646 systemd[1]: sshd@21-10.200.20.21:22-10.200.16.10:45826.service: Deactivated successfully. Sep 4 17:22:21.573569 systemd-logind[1676]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:22:21.574415 systemd-logind[1676]: Removed session 24. Sep 4 17:22:26.647697 systemd[1]: Started sshd@22-10.200.20.21:22-10.200.16.10:45828.service - OpenSSH per-connection server daemon (10.200.16.10:45828). Sep 4 17:22:27.099301 sshd[6089]: Accepted publickey for core from 10.200.16.10 port 45828 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:27.100542 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:27.104629 systemd-logind[1676]: New session 25 of user core. Sep 4 17:22:27.109210 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:22:27.483508 sshd[6089]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:27.485957 systemd[1]: sshd@22-10.200.20.21:22-10.200.16.10:45828.service: Deactivated successfully. Sep 4 17:22:27.487871 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:22:27.489965 systemd-logind[1676]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:22:27.490824 systemd-logind[1676]: Removed session 25. Sep 4 17:22:32.569496 systemd[1]: Started sshd@23-10.200.20.21:22-10.200.16.10:37004.service - OpenSSH per-connection server daemon (10.200.16.10:37004). Sep 4 17:22:33.019342 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 37004 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:33.020689 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:33.024847 systemd-logind[1676]: New session 26 of user core. Sep 4 17:22:33.028289 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:22:33.409902 sshd[6126]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:33.413375 systemd[1]: sshd@23-10.200.20.21:22-10.200.16.10:37004.service: Deactivated successfully. Sep 4 17:22:33.415121 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:22:33.415860 systemd-logind[1676]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:22:33.417019 systemd-logind[1676]: Removed session 26. 
Sep 4 17:22:38.494043 systemd[1]: Started sshd@24-10.200.20.21:22-10.200.16.10:59048.service - OpenSSH per-connection server daemon (10.200.16.10:59048). Sep 4 17:22:38.939732 sshd[6166]: Accepted publickey for core from 10.200.16.10 port 59048 ssh2: RSA SHA256:uJFoA0T1hXnyFLb0yM6vOwkxp0sbhAUAuwN9YKlfEJI Sep 4 17:22:38.941059 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:22:38.945605 systemd-logind[1676]: New session 27 of user core. Sep 4 17:22:38.951249 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:22:39.316819 sshd[6166]: pam_unix(sshd:session): session closed for user core Sep 4 17:22:39.320270 systemd-logind[1676]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:22:39.320759 systemd[1]: sshd@24-10.200.20.21:22-10.200.16.10:59048.service: Deactivated successfully. Sep 4 17:22:39.324034 systemd[1]: session-27.scope: Deactivated successfully. Sep 4 17:22:39.324965 systemd-logind[1676]: Removed session 27.
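
The rest of the section is routine SSH access: for each connection systemd starts a per-connection sshd@... unit, sshd accepts the public key for user core, pam_unix and systemd-logind open a numbered session, and shortly afterwards the session closes and the unit is deactivated. One way to summarize such a stretch of journal output is to pair the "New session" and "Removed session" events and print per-session durations; the Go filter below is an illustrative sketch that assumes one journal entry per line (as journalctl prints them, rather than the wrapped lines shown here), with regular expressions tuned to these exact messages.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // Matches the systemd-logind messages above, e.g.
    //   Sep 4 17:21:26.003382 systemd-logind[1676]: New session 10 of user core.
    //   Sep 4 17:21:26.397224 systemd-logind[1676]: Removed session 10.
    var (
        openRe  = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*New session (\d+) of user`)
        closeRe = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) .*Removed session (\d+)\.`)
    )

    func parseTS(s string) time.Time {
        t, _ := time.Parse("Jan 2 15:04:05.000000", s) // year-less journal timestamp
        return t
    }

    func main() {
        opened := map[string]time.Time{} // session number -> open time
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            if m := openRe.FindStringSubmatch(line); m != nil {
                opened[m[2]] = parseTS(m[1])
            } else if m := closeRe.FindStringSubmatch(line); m != nil {
                if start, ok := opened[m[2]]; ok {
                    fmt.Printf("session %s lasted %s\n",
                        m[2], parseTS(m[1]).Sub(start).Round(time.Millisecond))
                }
            }
        }
    }

Run against this section it would report each core session lasting well under a minute, consistent with short automated check-ins from 10.200.16.10.
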