Jul 2 00:23:01.315474 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 2 00:23:01.315499 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024
Jul 2 00:23:01.315507 kernel: KASLR enabled
Jul 2 00:23:01.315514 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jul 2 00:23:01.315520 kernel: printk: bootconsole [pl11] enabled
Jul 2 00:23:01.315526 kernel: efi: EFI v2.7 by EDK II
Jul 2 00:23:01.315533 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18
Jul 2 00:23:01.315539 kernel: random: crng init done
Jul 2 00:23:01.315545 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:23:01.315551 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Jul 2 00:23:01.315557 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315563 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315571 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jul 2 00:23:01.315577 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315584 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315590 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315597 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315605 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315611 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315618 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jul 2 00:23:01.315624 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jul 2 00:23:01.315631 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jul 2 00:23:01.315637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jul 2 00:23:01.315643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jul 2 00:23:01.315649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jul 2 00:23:01.315656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jul 2 00:23:01.315662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jul 2 00:23:01.315669 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jul 2 00:23:01.315677 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jul 2 00:23:01.315683 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jul 2 00:23:01.315689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jul 2 00:23:01.315696 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jul 2 00:23:01.315702 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jul 2 00:23:01.315708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jul 2 00:23:01.315714 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jul 2 00:23:01.315721 kernel: Zone ranges:
Jul 2 00:23:01.315727 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jul 2 00:23:01.315733 kernel: DMA32 empty
Jul 2 00:23:01.315739 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 00:23:01.315747 kernel: Movable zone start for each node
Jul 2 00:23:01.315756 kernel: Early memory node ranges
Jul 2 00:23:01.315763 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jul 2 00:23:01.315770 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Jul 2 00:23:01.315777 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Jul 2 00:23:01.315785 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Jul 2 00:23:01.315792 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Jul 2 00:23:01.315798 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Jul 2 00:23:01.315805 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Jul 2 00:23:01.315811 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Jul 2 00:23:01.315818 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jul 2 00:23:01.315825 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jul 2 00:23:01.315831 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jul 2 00:23:01.315838 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:23:01.315845 kernel: psci: PSCIv1.1 detected in firmware.
Jul 2 00:23:01.315852 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:23:01.315858 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jul 2 00:23:01.315866 kernel: psci: SMC Calling Convention v1.4
Jul 2 00:23:01.315873 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jul 2 00:23:01.315880 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jul 2 00:23:01.315887 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Jul 2 00:23:01.315893 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Jul 2 00:23:01.317957 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 00:23:01.317990 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:23:01.317998 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:23:01.318005 kernel: CPU features: detected: Hardware dirty bit management
Jul 2 00:23:01.318012 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:23:01.318019 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:23:01.318026 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:23:01.318041 kernel: CPU features: detected: ARM erratum 1418040
Jul 2 00:23:01.318048 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jul 2 00:23:01.318055 kernel: alternatives: applying boot alternatives
Jul 2 00:23:01.318063 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:23:01.318071 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:23:01.318078 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:23:01.318085 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:23:01.318092 kernel: Fallback order for Node 0: 0
Jul 2 00:23:01.318099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jul 2 00:23:01.318105 kernel: Policy zone: Normal
Jul 2 00:23:01.318114 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:23:01.318121 kernel: software IO TLB: area num 2.
Jul 2 00:23:01.318128 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB)
Jul 2 00:23:01.318135 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved)
Jul 2 00:23:01.318142 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:23:01.318149 kernel: trace event string verifier disabled
Jul 2 00:23:01.318156 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:23:01.318164 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:23:01.318171 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:23:01.318178 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:23:01.318184 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:23:01.318191 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:23:01.318200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:23:01.318207 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:23:01.318213 kernel: GICv3: 960 SPIs implemented
Jul 2 00:23:01.318220 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:23:01.318227 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:23:01.318233 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 2 00:23:01.318240 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jul 2 00:23:01.318247 kernel: ITS: No ITS available, not enabling LPIs
Jul 2 00:23:01.318254 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 00:23:01.318261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:23:01.318268 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 2 00:23:01.318277 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 2 00:23:01.318284 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 2 00:23:01.318291 kernel: Console: colour dummy device 80x25
Jul 2 00:23:01.318298 kernel: printk: console [tty1] enabled
Jul 2 00:23:01.318306 kernel: ACPI: Core revision 20230628
Jul 2 00:23:01.318313 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 2 00:23:01.318320 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:23:01.318327 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Jul 2 00:23:01.318334 kernel: SELinux: Initializing.
Jul 2 00:23:01.318341 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:23:01.318350 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:23:01.318357 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:01.318364 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Jul 2 00:23:01.318372 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jul 2 00:23:01.318379 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Jul 2 00:23:01.318386 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jul 2 00:23:01.318394 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:23:01.318408 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 00:23:01.318415 kernel: Remapping and enabling EFI services.
Jul 2 00:23:01.318423 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:23:01.318430 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:23:01.318439 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jul 2 00:23:01.318447 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 2 00:23:01.318454 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 2 00:23:01.318462 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:23:01.318469 kernel: SMP: Total of 2 processors activated.
Jul 2 00:23:01.318478 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:23:01.318486 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jul 2 00:23:01.318493 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 2 00:23:01.318501 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:23:01.318509 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 2 00:23:01.318516 kernel: CPU features: detected: LSE atomic instructions
Jul 2 00:23:01.318524 kernel: CPU features: detected: Privileged Access Never
Jul 2 00:23:01.318531 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:23:01.318538 kernel: alternatives: applying system-wide alternatives
Jul 2 00:23:01.318548 kernel: devtmpfs: initialized
Jul 2 00:23:01.318555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:23:01.318562 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:23:01.318570 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:23:01.318577 kernel: SMBIOS 3.1.0 present.
Jul 2 00:23:01.318585 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023
Jul 2 00:23:01.318592 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:23:01.318599 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:23:01.318607 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:23:01.318616 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:23:01.318624 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:23:01.318631 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1
Jul 2 00:23:01.318638 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:23:01.318646 kernel: cpuidle: using governor menu
Jul 2 00:23:01.318653 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:23:01.318661 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:23:01.318669 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:23:01.318676 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:23:01.318685 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 2 00:23:01.318692 kernel: Modules: 0 pages in range for non-PLT usage
Jul 2 00:23:01.318700 kernel: Modules: 509120 pages in range for PLT usage
Jul 2 00:23:01.318707 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:23:01.318714 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 00:23:01.318722 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:23:01.318729 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 2 00:23:01.318737 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:23:01.318744 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 00:23:01.318754 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:23:01.318761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 2 00:23:01.318768 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:23:01.318776 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:23:01.318783 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:23:01.318790 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:23:01.318798 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:23:01.318805 kernel: ACPI: Interpreter enabled
Jul 2 00:23:01.318813 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:23:01.318822 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jul 2 00:23:01.318829 kernel: printk: console [ttyAMA0] enabled
Jul 2 00:23:01.318837 kernel: printk: bootconsole [pl11] disabled
Jul 2 00:23:01.318844 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jul 2 00:23:01.318852 kernel: iommu: Default domain type: Translated
Jul 2 00:23:01.318860 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:23:01.318867 kernel: efivars: Registered efivars operations
Jul 2 00:23:01.318874 kernel: vgaarb: loaded
Jul 2 00:23:01.318882 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:23:01.318891 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:23:01.318898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:23:01.318918 kernel: pnp: PnP ACPI init
Jul 2 00:23:01.318925 kernel: pnp: PnP ACPI: found 0 devices
Jul 2 00:23:01.318933 kernel: NET: Registered PF_INET protocol family
Jul 2 00:23:01.318940 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:23:01.318948 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:23:01.318955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:23:01.318963 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:23:01.318973 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 2 00:23:01.318980 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:23:01.318988 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:23:01.318995 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:23:01.319003 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:23:01.319010 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:23:01.319017 kernel: kvm [1]: HYP mode not available
Jul 2 00:23:01.319025 kernel: Initialise system trusted keyrings
Jul 2 00:23:01.319032 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:23:01.319041 kernel: Key type asymmetric registered
Jul 2 00:23:01.319049 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:23:01.319056 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 2 00:23:01.319064 kernel: io scheduler mq-deadline registered
Jul 2 00:23:01.319071 kernel: io scheduler kyber registered
Jul 2 00:23:01.319078 kernel: io scheduler bfq registered
Jul 2 00:23:01.319086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:23:01.319093 kernel: thunder_xcv, ver 1.0
Jul 2 00:23:01.319100 kernel: thunder_bgx, ver 1.0
Jul 2 00:23:01.319107 kernel: nicpf, ver 1.0
Jul 2 00:23:01.319116 kernel: nicvf, ver 1.0
Jul 2 00:23:01.319300 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:23:01.319376 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:23:00 UTC (1719879780)
Jul 2 00:23:01.319387 kernel: efifb: probing for efifb
Jul 2 00:23:01.319394 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jul 2 00:23:01.319402 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jul 2 00:23:01.319409 kernel: efifb: scrolling: redraw
Jul 2 00:23:01.319419 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 2 00:23:01.319426 kernel: Console: switching to colour frame buffer device 128x48
Jul 2 00:23:01.319434 kernel: fb0: EFI VGA frame buffer device
Jul 2 00:23:01.319441 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jul 2 00:23:01.319449 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:23:01.319456 kernel: No ACPI PMU IRQ for CPU0
Jul 2 00:23:01.319463 kernel: No ACPI PMU IRQ for CPU1
Jul 2 00:23:01.319470 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jul 2 00:23:01.319478 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 2 00:23:01.319487 kernel: watchdog: Hard watchdog permanently disabled
Jul 2 00:23:01.319495 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:23:01.319502 kernel: Segment Routing with IPv6
Jul 2 00:23:01.319510 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:23:01.319517 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:23:01.319525 kernel: Key type dns_resolver registered
Jul 2 00:23:01.319532 kernel: registered taskstats version 1
Jul 2 00:23:01.319539 kernel: Loading compiled-in X.509 certificates
Jul 2 00:23:01.319547 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf'
Jul 2 00:23:01.319556 kernel: Key type .fscrypt registered
Jul 2 00:23:01.319563 kernel: Key type fscrypt-provisioning registered
Jul 2 00:23:01.319571 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:23:01.319578 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:23:01.319586 kernel: ima: No architecture policies found
Jul 2 00:23:01.319593 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:23:01.319601 kernel: clk: Disabling unused clocks
Jul 2 00:23:01.319608 kernel: Freeing unused kernel memory: 39040K
Jul 2 00:23:01.319616 kernel: Run /init as init process
Jul 2 00:23:01.319625 kernel: with arguments:
Jul 2 00:23:01.319632 kernel: /init
Jul 2 00:23:01.319639 kernel: with environment:
Jul 2 00:23:01.319647 kernel: HOME=/
Jul 2 00:23:01.319654 kernel: TERM=linux
Jul 2 00:23:01.319661 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:23:01.319671 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 2 00:23:01.319681 systemd[1]: Detected virtualization microsoft.
Jul 2 00:23:01.319691 systemd[1]: Detected architecture arm64.
Jul 2 00:23:01.319698 systemd[1]: Running in initrd.
Jul 2 00:23:01.319706 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:23:01.319714 systemd[1]: Hostname set to .
Jul 2 00:23:01.319722 systemd[1]: Initializing machine ID from random generator.
Jul 2 00:23:01.319730 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:23:01.319738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:01.319746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:01.319756 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 2 00:23:01.319765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 2 00:23:01.319773 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 2 00:23:01.319781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 2 00:23:01.319791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 2 00:23:01.319799 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 2 00:23:01.319807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:01.319817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:23:01.319825 systemd[1]: Reached target paths.target - Path Units.
Jul 2 00:23:01.319833 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 00:23:01.319841 systemd[1]: Reached target swap.target - Swaps.
Jul 2 00:23:01.319848 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 00:23:01.319856 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:23:01.319864 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:23:01.319872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 00:23:01.319882 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 00:23:01.319890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 00:23:01.319898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 00:23:01.321970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 00:23:01.321981 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 00:23:01.321990 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 2 00:23:01.321998 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 00:23:01.322007 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 00:23:01.322015 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:23:01.322030 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 00:23:01.322038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 00:23:01.322079 systemd-journald[216]: Collecting audit messages is disabled.
Jul 2 00:23:01.322101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:01.322113 systemd-journald[216]: Journal started
Jul 2 00:23:01.322132 systemd-journald[216]: Runtime Journal (/run/log/journal/3af3a342cb5d4ca5b402e2adf0095dba) is 8.0M, max 78.6M, 70.6M free.
Jul 2 00:23:01.331645 systemd-modules-load[217]: Inserted module 'overlay'
Jul 2 00:23:01.348786 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 00:23:01.353061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 2 00:23:01.382655 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:23:01.382686 kernel: Bridge firewalling registered
Jul 2 00:23:01.367688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:01.382629 systemd-modules-load[217]: Inserted module 'br_netfilter'
Jul 2 00:23:01.396619 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:23:01.410259 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 00:23:01.418759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:01.436163 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:23:01.446122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 00:23:01.462116 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 2 00:23:01.490197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 00:23:01.499921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:01.516135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 00:23:01.528977 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 2 00:23:01.549635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 00:23:01.569423 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 2 00:23:01.582584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 00:23:01.597115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 00:23:01.623717 dracut-cmdline[251]: dracut-dracut-053
Jul 2 00:23:01.623717 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295
Jul 2 00:23:01.663879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 00:23:01.668292 systemd-resolved[254]: Positive Trust Anchors:
Jul 2 00:23:01.668303 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:23:01.668333 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Jul 2 00:23:01.670564 systemd-resolved[254]: Defaulting to hostname 'linux'.
Jul 2 00:23:01.673893 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 00:23:01.698330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:23:01.804925 kernel: SCSI subsystem initialized
Jul 2 00:23:01.813934 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:23:01.824934 kernel: iscsi: registered transport (tcp)
Jul 2 00:23:01.842768 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:23:01.842795 kernel: QLogic iSCSI HBA Driver
Jul 2 00:23:01.876129 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 2 00:23:01.889258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 2 00:23:01.918923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:23:01.918989 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:23:01.928409 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 2 00:23:01.977929 kernel: raid6: neonx8 gen() 15735 MB/s
Jul 2 00:23:01.997913 kernel: raid6: neonx4 gen() 15689 MB/s
Jul 2 00:23:02.018911 kernel: raid6: neonx2 gen() 13242 MB/s
Jul 2 00:23:02.038910 kernel: raid6: neonx1 gen() 10464 MB/s
Jul 2 00:23:02.058910 kernel: raid6: int64x8 gen() 6969 MB/s
Jul 2 00:23:02.079910 kernel: raid6: int64x4 gen() 7347 MB/s
Jul 2 00:23:02.099911 kernel: raid6: int64x2 gen() 6130 MB/s
Jul 2 00:23:02.123278 kernel: raid6: int64x1 gen() 5052 MB/s
Jul 2 00:23:02.123299 kernel: raid6: using algorithm neonx8 gen() 15735 MB/s
Jul 2 00:23:02.148106 kernel: raid6: .... xor() 11954 MB/s, rmw enabled
Jul 2 00:23:02.148130 kernel: raid6: using neon recovery algorithm
Jul 2 00:23:02.159756 kernel: xor: measuring software checksum speed
Jul 2 00:23:02.159770 kernel: 8regs : 19854 MB/sec
Jul 2 00:23:02.163482 kernel: 32regs : 19668 MB/sec
Jul 2 00:23:02.171020 kernel: arm64_neon : 27170 MB/sec
Jul 2 00:23:02.171041 kernel: xor: using function: arm64_neon (27170 MB/sec)
Jul 2 00:23:02.221918 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 2 00:23:02.231849 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 00:23:02.247103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 00:23:02.269709 systemd-udevd[437]: Using default interface naming scheme 'v255'.
Jul 2 00:23:02.275125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 00:23:02.294090 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 2 00:23:02.309022 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation
Jul 2 00:23:02.338971 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:23:02.358232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 00:23:02.399958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:02.424147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 2 00:23:02.452554 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:23:02.467009 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:23:02.483412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:23:02.497588 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 00:23:02.515164 kernel: hv_vmbus: Vmbus version:5.3
Jul 2 00:23:02.527919 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:23:02.527977 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:23:02.537074 kernel: PTP clock support registered
Jul 2 00:23:02.537125 kernel: hv_vmbus: registering driver hyperv_keyboard
Jul 2 00:23:02.550162 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 2 00:23:02.573771 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jul 2 00:23:02.573795 kernel: hv_vmbus: registering driver hid_hyperv
Jul 2 00:23:02.573806 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jul 2 00:23:02.582912 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jul 2 00:23:02.575054 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 00:23:02.603877 kernel: hv_vmbus: registering driver hv_storvsc
Jul 2 00:23:02.575203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:02.628438 kernel: hv_vmbus: registering driver hv_netvsc
Jul 2 00:23:02.628461 kernel: hv_utils: Registering HyperV Utility Driver
Jul 2 00:23:02.603959 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:23:02.874111 kernel: hv_vmbus: registering driver hv_utils
Jul 2 00:23:02.874139 kernel: hv_utils: Heartbeat IC version 3.0
Jul 2 00:23:02.874149 kernel: hv_utils: Shutdown IC version 3.2
Jul 2 00:23:02.874159 kernel: hv_utils: TimeSync IC version 4.0
Jul 2 00:23:02.874168 kernel: scsi host0: storvsc_host_t
Jul 2 00:23:02.874354 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jul 2 00:23:02.874378 kernel: scsi host1: storvsc_host_t
Jul 2 00:23:02.874466 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jul 2 00:23:02.614939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 00:23:02.615168 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:02.846522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:02.848591 systemd-resolved[254]: Clock change detected. Flushing caches.
Jul 2 00:23:02.890620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 2 00:23:02.951798 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jul 2 00:23:02.953411 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 2 00:23:02.953430 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: VF slot 1 added
Jul 2 00:23:02.953647 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jul 2 00:23:02.907351 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:23:02.919543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 2 00:23:02.972251 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 2 00:23:03.011906 kernel: hv_vmbus: registering driver hv_pci
Jul 2 00:23:03.011940 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jul 2 00:23:03.076408 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jul 2 00:23:03.076531 kernel: hv_pci d3fb7eb2-5c68-483d-b558-57b0d95abe6f: PCI VMBus probing: Using version 0x10004
Jul 2 00:23:03.118294 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jul 2 00:23:03.118448 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jul 2 00:23:03.118787 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jul 2 00:23:03.118911 kernel: hv_pci d3fb7eb2-5c68-483d-b558-57b0d95abe6f: PCI host bridge to bus 5c68:00
Jul 2 00:23:03.119017 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:23:03.119028 kernel: pci_bus 5c68:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jul 2 00:23:03.119129 kernel: pci_bus 5c68:00: No busn resource found for root bus, will use [bus 00-ff]
Jul 2 00:23:03.119208 kernel: pci 5c68:00:02.0: [15b3:1018] type 00 class 0x020000
Jul 2 00:23:03.119309 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jul 2 00:23:03.119403 kernel: pci 5c68:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 00:23:03.119507 kernel: pci 5c68:00:02.0: enabling Extended Tags
Jul 2 00:23:03.119636 kernel: pci 5c68:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5c68:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jul 2 00:23:03.119727 kernel: pci_bus 5c68:00: busn_res: [bus 00-ff] end is updated to 00
Jul 2 00:23:03.119807 kernel: pci 5c68:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jul 2 00:23:03.037804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 00:23:03.167595 kernel: mlx5_core 5c68:00:02.0: enabling device (0000 -> 0002)
Jul 2 00:23:03.394809 kernel: mlx5_core 5c68:00:02.0: firmware version: 16.30.1284
Jul 2 00:23:03.394971 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: VF registering: eth1
Jul 2 00:23:03.395281 kernel: mlx5_core 5c68:00:02.0 eth1: joined to eth0
Jul 2 00:23:03.395381 kernel: mlx5_core 5c68:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jul 2 00:23:03.403595 kernel: mlx5_core 5c68:00:02.0 enP23656s1: renamed from eth1
Jul 2 00:23:03.460814 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jul 2 00:23:03.549687 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (500)
Jul 2 00:23:03.564379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jul 2 00:23:03.597083 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (481)
Jul 2 00:23:03.591613 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jul 2 00:23:03.603294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jul 2 00:23:03.611811 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jul 2 00:23:03.640871 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 2 00:23:03.668575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:23:03.674578 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:23:04.685580 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jul 2 00:23:04.687089 disk-uuid[598]: The operation has completed successfully.
Jul 2 00:23:04.754908 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:23:04.755021 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 2 00:23:04.781728 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 2 00:23:04.794343 sh[711]: Success
Jul 2 00:23:04.824588 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:23:04.999016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 2 00:23:05.007532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 2 00:23:05.021762 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 2 00:23:05.054131 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17
Jul 2 00:23:05.054185 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:23:05.064131 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 2 00:23:05.070048 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 2 00:23:05.074463 kernel: BTRFS info (device dm-0): using free space tree
Jul 2 00:23:05.544879 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 2 00:23:05.550672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 2 00:23:05.570900 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 2 00:23:05.578777 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 2 00:23:05.613217 kernel: BTRFS info (device sda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:23:05.613271 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:23:05.617696 kernel: BTRFS info (device sda6): using free space tree
Jul 2 00:23:05.648960 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 00:23:05.665229 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:23:05.670753 kernel: BTRFS info (device sda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:23:05.678987 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 2 00:23:05.695823 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 2 00:23:05.716666 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 00:23:05.737702 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 00:23:05.764775 systemd-networkd[896]: lo: Link UP
Jul 2 00:23:05.764786 systemd-networkd[896]: lo: Gained carrier
Jul 2 00:23:05.766681 systemd-networkd[896]: Enumeration completed
Jul 2 00:23:05.769208 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 00:23:05.769750 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:05.769753 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:23:05.779533 systemd[1]: Reached target network.target - Network.
Jul 2 00:23:05.866579 kernel: mlx5_core 5c68:00:02.0 enP23656s1: Link up
Jul 2 00:23:05.907582 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: Data path switched to VF: enP23656s1
Jul 2 00:23:05.908295 systemd-networkd[896]: enP23656s1: Link UP
Jul 2 00:23:05.908394 systemd-networkd[896]: eth0: Link UP
Jul 2 00:23:05.908488 systemd-networkd[896]: eth0: Gained carrier
Jul 2 00:23:05.908497 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:05.921162 systemd-networkd[896]: enP23656s1: Gained carrier
Jul 2 00:23:05.941683 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 2 00:23:06.481945 ignition[885]: Ignition 2.18.0
Jul 2 00:23:06.481957 ignition[885]: Stage: fetch-offline
Jul 2 00:23:06.486851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:23:06.481997 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:06.482005 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:23:06.482105 ignition[885]: parsed url from cmdline: ""
Jul 2 00:23:06.510876 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jul 2 00:23:06.482110 ignition[885]: no config URL provided
Jul 2 00:23:06.482115 ignition[885]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:23:06.482122 ignition[885]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:23:06.482127 ignition[885]: failed to fetch config: resource requires networking
Jul 2 00:23:06.482324 ignition[885]: Ignition finished successfully
Jul 2 00:23:06.535017 ignition[906]: Ignition 2.18.0
Jul 2 00:23:06.535024 ignition[906]: Stage: fetch
Jul 2 00:23:06.535218 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:06.535228 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:23:06.535343 ignition[906]: parsed url from cmdline: ""
Jul 2 00:23:06.535349 ignition[906]: no config URL provided
Jul 2 00:23:06.535354 ignition[906]: reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:23:06.535361 ignition[906]: no config at "/usr/lib/ignition/user.ign"
Jul 2 00:23:06.535395 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jul 2 00:23:06.620811 ignition[906]: GET result: OK
Jul 2 00:23:06.620851 ignition[906]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Jul 2 00:23:06.641355 ignition[906]: opening config device: "/dev/sr0"
Jul 2 00:23:06.642009 ignition[906]: getting drive status for "/dev/sr0"
Jul 2 00:23:06.642068 ignition[906]: drive status: OK
Jul 2 00:23:06.642105 ignition[906]: mounting config device
Jul 2 00:23:06.642127 ignition[906]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure3486351601"
Jul 2 00:23:06.662184 ignition[906]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure3486351601"
Jul 2 00:23:06.669320 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/07/03 00:00 (1000)
Jul 2 00:23:06.662194 ignition[906]: checking for config drive
Jul 2 00:23:06.669118 ignition[906]: reading config
Jul 2 00:23:06.669472 ignition[906]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure3486351601"
Jul 2 00:23:06.669641 ignition[906]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure3486351601"
Jul 2 00:23:06.669911 systemd[1]: tmp-ignition\x2dazure3486351601.mount: Deactivated successfully.
Jul 2 00:23:06.669657 ignition[906]: config has been read from custom data
Jul 2 00:23:06.674317 unknown[906]: fetched base config from "system"
Jul 2 00:23:06.670288 ignition[906]: parsing config with SHA512: 6dd2f975b445b3ee2c4689e52d5aea775e4f0e70c2b73296e7223dcc39edc83bfc8c0021aeb901a601d24796b6bbb301bab94079b49b4062b0d37a958bcf7184
Jul 2 00:23:06.674324 unknown[906]: fetched base config from "system"
Jul 2 00:23:06.674704 ignition[906]: fetch: fetch complete
Jul 2 00:23:06.674329 unknown[906]: fetched user config from "azure"
Jul 2 00:23:06.674709 ignition[906]: fetch: fetch passed
Jul 2 00:23:06.678607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jul 2 00:23:06.674748 ignition[906]: Ignition finished successfully
Jul 2 00:23:06.705830 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 2 00:23:06.724385 ignition[914]: Ignition 2.18.0
Jul 2 00:23:06.730066 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 2 00:23:06.724407 ignition[914]: Stage: kargs
Jul 2 00:23:06.749047 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 2 00:23:06.724725 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:06.724763 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:23:06.727040 ignition[914]: kargs: kargs passed
Jul 2 00:23:06.727128 ignition[914]: Ignition finished successfully
Jul 2 00:23:06.793914 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 2 00:23:06.782062 ignition[921]: Ignition 2.18.0
Jul 2 00:23:06.804715 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 2 00:23:06.782069 ignition[921]: Stage: disks
Jul 2 00:23:06.816574 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 00:23:06.782386 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:06.827613 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 00:23:06.782402 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:23:06.839843 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 00:23:06.789426 ignition[921]: disks: disks passed
Jul 2 00:23:06.849867 systemd[1]: Reached target basic.target - Basic System.
Jul 2 00:23:06.789527 ignition[921]: Ignition finished successfully
Jul 2 00:23:06.882833 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 2 00:23:07.013960 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jul 2 00:23:07.029685 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 2 00:23:07.047839 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 2 00:23:07.105622 kernel: EXT4-fs (sda9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none.
Jul 2 00:23:07.106682 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 2 00:23:07.111610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 2 00:23:07.151633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:23:07.158696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 2 00:23:07.179080 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jul 2 00:23:07.199624 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942)
Jul 2 00:23:07.199649 kernel: BTRFS info (device sda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:23:07.199660 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:23:07.199670 kernel: BTRFS info (device sda6): using free space tree
Jul 2 00:23:07.200571 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 00:23:07.213604 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:23:07.213660 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:23:07.238773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:23:07.243914 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 2 00:23:07.257779 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 2 00:23:07.332670 systemd-networkd[896]: eth0: Gained IPv6LL
Jul 2 00:23:07.333019 systemd-networkd[896]: enP23656s1: Gained IPv6LL
Jul 2 00:23:08.012921 coreos-metadata[944]: Jul 02 00:23:08.012 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jul 2 00:23:08.022432 coreos-metadata[944]: Jul 02 00:23:08.016 INFO Fetch successful
Jul 2 00:23:08.022432 coreos-metadata[944]: Jul 02 00:23:08.016 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jul 2 00:23:08.042184 coreos-metadata[944]: Jul 02 00:23:08.028 INFO Fetch successful
Jul 2 00:23:08.042184 coreos-metadata[944]: Jul 02 00:23:08.041 INFO wrote hostname ci-3975.1.1-a-3e8d94ffa6 to /sysroot/etc/hostname
Jul 2 00:23:08.042799 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:23:08.316955 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:23:08.360897 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:23:08.368078 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:23:08.385258 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:23:09.286483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 00:23:09.299971 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 00:23:09.311462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 00:23:09.332829 kernel: BTRFS info (device sda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:23:09.326887 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 00:23:09.356241 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 00:23:09.367757 ignition[1061]: INFO : Ignition 2.18.0
Jul 2 00:23:09.367757 ignition[1061]: INFO : Stage: mount
Jul 2 00:23:09.367757 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:09.367757 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:23:09.395503 ignition[1061]: INFO : mount: mount passed
Jul 2 00:23:09.395503 ignition[1061]: INFO : Ignition finished successfully
Jul 2 00:23:09.375871 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 00:23:09.399757 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 00:23:09.420848 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 00:23:09.452199 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072)
Jul 2 00:23:09.452265 kernel: BTRFS info (device sda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1
Jul 2 00:23:09.458564 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:23:09.462586 kernel: BTRFS info (device sda6): using free space tree
Jul 2 00:23:09.469593 kernel: BTRFS info (device sda6): auto enabling async discard
Jul 2 00:23:09.470906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 00:23:09.494817 ignition[1090]: INFO : Ignition 2.18.0
Jul 2 00:23:09.494817 ignition[1090]: INFO : Stage: files
Jul 2 00:23:09.502799 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:09.502799 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:23:09.502799 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:23:09.567928 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:23:09.567928 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:23:09.649939 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:23:09.657577 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:23:09.657577 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:23:09.650337 unknown[1090]: wrote ssh authorized keys file for user: core
Jul 2 00:23:09.729562 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:23:09.740159 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 00:23:10.157720 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 2 00:23:10.379566 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jul 2 00:23:10.796030 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 2 00:23:11.001589 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 00:23:11.001589 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 00:23:11.022662 ignition[1090]: INFO : files: files passed
Jul 2 00:23:11.022662 ignition[1090]: INFO : Ignition finished successfully
Jul 2 00:23:11.023050 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 00:23:11.062384 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 00:23:11.078792 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 00:23:11.108079 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 00:23:11.175185 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:23:11.175185 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:23:11.108196 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 00:23:11.207857 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 00:23:11.149687 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:23:11.157479 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 00:23:11.184856 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 00:23:11.233151 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 00:23:11.233301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 00:23:11.244480 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 00:23:11.255524 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 00:23:11.268084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 00:23:11.270787 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 00:23:11.303408 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:23:11.313833 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 00:23:11.336824 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 00:23:11.349982 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 00:23:11.356813 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 00:23:11.368024 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 00:23:11.368204 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 00:23:11.384398 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 00:23:11.396436 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 00:23:11.406637 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 00:23:11.417173 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 00:23:11.429344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 00:23:11.441884 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 00:23:11.453631 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 00:23:11.466391 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 00:23:11.478631 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 00:23:11.489465 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 00:23:11.498932 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 00:23:11.499112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 00:23:11.514817 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 00:23:11.526046 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 2 00:23:11.538294 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 2 00:23:11.538400 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 2 00:23:11.551388 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 00:23:11.551574 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 00:23:11.569540 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 00:23:11.569735 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 00:23:11.581757 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 00:23:11.581910 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 00:23:11.592400 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jul 2 00:23:11.592549 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jul 2 00:23:11.624710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 00:23:11.641083 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 00:23:11.661029 ignition[1142]: INFO : Ignition 2.18.0
Jul 2 00:23:11.661029 ignition[1142]: INFO : Stage: umount
Jul 2 00:23:11.661029 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 00:23:11.661029 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jul 2 00:23:11.661029 ignition[1142]: INFO : umount: umount passed
Jul 2 00:23:11.661029 ignition[1142]: INFO : Ignition finished successfully
Jul 2 00:23:11.641347 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 00:23:11.665869 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 00:23:11.678440 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 00:23:11.678632 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 00:23:11.691217 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 00:23:11.691342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 00:23:11.706049 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 00:23:11.706139 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 00:23:11.718566 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 00:23:11.718847 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 00:23:11.730146 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 00:23:11.730208 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 00:23:11.736341 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 00:23:11.736388 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 00:23:11.742324 systemd[1]: Stopped target network.target - Network.
Jul 2 00:23:11.757506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 00:23:11.757611 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 00:23:11.770250 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 00:23:11.775347 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 00:23:11.780567 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 00:23:11.787985 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 00:23:11.798807 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 00:23:11.809412 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 00:23:11.809469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 00:23:11.822730 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 00:23:11.822793 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 00:23:11.833847 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 00:23:11.833915 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 00:23:11.844306 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 2 00:23:11.844359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 2 00:23:11.855074 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:23:11.865332 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:23:11.880498 systemd-networkd[896]: eth0: DHCPv6 lease lost Jul 2 00:23:11.882603 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:23:12.100375 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: Data path switched from VF: enP23656s1 Jul 2 00:23:11.883274 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:23:11.883357 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 00:23:11.899099 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:23:11.899205 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:23:11.911081 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:23:11.912599 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:23:11.924128 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:23:11.924188 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:11.952806 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:23:11.958156 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:23:11.958237 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:23:11.965532 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:23:11.965608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:11.975929 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:23:11.975981 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:11.986500 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 2 00:23:11.986570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:11.999055 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:12.049110 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:23:12.049296 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:12.062884 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:23:12.062927 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:12.084972 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:23:12.085018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:12.095744 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:23:12.095803 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:23:12.112048 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:23:12.112102 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:23:12.135048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:23:12.135117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:12.171813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:23:12.185719 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:23:12.185785 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:12.201028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:12.201089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:12.212742 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 2 00:23:12.212835 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:23:12.223521 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:23:12.223614 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:23:13.020506 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:23:13.020664 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:23:13.031284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:23:13.041336 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:23:13.041409 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:23:13.064820 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:23:13.077964 systemd[1]: Switching root. Jul 2 00:23:13.140682 systemd-journald[216]: Journal stopped Jul 2 00:23:01.315474 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 2 00:23:01.315499 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Jul 1 22:48:46 -00 2024 Jul 2 00:23:01.315507 kernel: KASLR enabled Jul 2 00:23:01.315514 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Jul 2 00:23:01.315520 kernel: printk: bootconsole [pl11] enabled Jul 2 00:23:01.315526 kernel: efi: EFI v2.7 by EDK II Jul 2 00:23:01.315533 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Jul 2 00:23:01.315539 kernel: random: crng init done Jul 2 00:23:01.315545 kernel: ACPI: Early table checksum verification disabled Jul 2 00:23:01.315551 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Jul 2 00:23:01.315557 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315563 
kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315571 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jul 2 00:23:01.315577 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315584 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315590 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315597 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315605 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315611 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315618 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Jul 2 00:23:01.315624 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 00:23:01.315631 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Jul 2 00:23:01.315637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Jul 2 00:23:01.315643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Jul 2 00:23:01.315649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Jul 2 00:23:01.315656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Jul 2 00:23:01.315662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Jul 2 00:23:01.315669 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Jul 2 00:23:01.315677 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Jul 2 00:23:01.315683 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Jul 2 00:23:01.315689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Jul 2 00:23:01.315696 kernel: 
ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Jul 2 00:23:01.315702 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Jul 2 00:23:01.315708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Jul 2 00:23:01.315714 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Jul 2 00:23:01.315721 kernel: Zone ranges: Jul 2 00:23:01.315727 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Jul 2 00:23:01.315733 kernel: DMA32 empty Jul 2 00:23:01.315739 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Jul 2 00:23:01.315747 kernel: Movable zone start for each node Jul 2 00:23:01.315756 kernel: Early memory node ranges Jul 2 00:23:01.315763 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Jul 2 00:23:01.315770 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Jul 2 00:23:01.315777 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Jul 2 00:23:01.315785 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Jul 2 00:23:01.315792 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Jul 2 00:23:01.315798 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Jul 2 00:23:01.315805 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Jul 2 00:23:01.315811 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Jul 2 00:23:01.315818 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Jul 2 00:23:01.315825 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Jul 2 00:23:01.315831 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Jul 2 00:23:01.315838 kernel: psci: probing for conduit method from ACPI. Jul 2 00:23:01.315845 kernel: psci: PSCIv1.1 detected in firmware. Jul 2 00:23:01.315852 kernel: psci: Using standard PSCI v0.2 function IDs Jul 2 00:23:01.315858 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Jul 2 00:23:01.315866 kernel: psci: SMC Calling Convention v1.4 Jul 2 00:23:01.315873 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jul 2 00:23:01.315880 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jul 2 00:23:01.315887 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Jul 2 00:23:01.315893 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Jul 2 00:23:01.317957 kernel: pcpu-alloc: [0] 0 [0] 1 Jul 2 00:23:01.317990 kernel: Detected PIPT I-cache on CPU0 Jul 2 00:23:01.317998 kernel: CPU features: detected: GIC system register CPU interface Jul 2 00:23:01.318005 kernel: CPU features: detected: Hardware dirty bit management Jul 2 00:23:01.318012 kernel: CPU features: detected: Spectre-BHB Jul 2 00:23:01.318019 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 2 00:23:01.318026 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 2 00:23:01.318041 kernel: CPU features: detected: ARM erratum 1418040 Jul 2 00:23:01.318048 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Jul 2 00:23:01.318055 kernel: alternatives: applying boot alternatives Jul 2 00:23:01.318063 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295 Jul 2 00:23:01.318071 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 2 00:23:01.318078 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 00:23:01.318085 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:23:01.318092 kernel: Fallback order for Node 0: 0 Jul 2 00:23:01.318099 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Jul 2 00:23:01.318105 kernel: Policy zone: Normal Jul 2 00:23:01.318114 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:23:01.318121 kernel: software IO TLB: area num 2. Jul 2 00:23:01.318128 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Jul 2 00:23:01.318135 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Jul 2 00:23:01.318142 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 00:23:01.318149 kernel: trace event string verifier disabled Jul 2 00:23:01.318156 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:23:01.318164 kernel: rcu: RCU event tracing is enabled. Jul 2 00:23:01.318171 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 00:23:01.318178 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:23:01.318184 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:23:01.318191 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 2 00:23:01.318200 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 00:23:01.318207 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 2 00:23:01.318213 kernel: GICv3: 960 SPIs implemented Jul 2 00:23:01.318220 kernel: GICv3: 0 Extended SPIs implemented Jul 2 00:23:01.318227 kernel: Root IRQ handler: gic_handle_irq Jul 2 00:23:01.318233 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 2 00:23:01.318240 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jul 2 00:23:01.318247 kernel: ITS: No ITS available, not enabling LPIs Jul 2 00:23:01.318254 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 00:23:01.318261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:23:01.318268 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 2 00:23:01.318277 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 2 00:23:01.318284 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 2 00:23:01.318291 kernel: Console: colour dummy device 80x25 Jul 2 00:23:01.318298 kernel: printk: console [tty1] enabled Jul 2 00:23:01.318306 kernel: ACPI: Core revision 20230628 Jul 2 00:23:01.318313 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 2 00:23:01.318320 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:23:01.318327 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:23:01.318334 kernel: SELinux: Initializing. Jul 2 00:23:01.318341 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:23:01.318350 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 00:23:01.318357 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. 
Jul 2 00:23:01.318364 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:23:01.318372 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jul 2 00:23:01.318379 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Jul 2 00:23:01.318386 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 2 00:23:01.318394 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:23:01.318408 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:23:01.318415 kernel: Remapping and enabling EFI services. Jul 2 00:23:01.318423 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:23:01.318430 kernel: Detected PIPT I-cache on CPU1 Jul 2 00:23:01.318439 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jul 2 00:23:01.318447 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 2 00:23:01.318454 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 2 00:23:01.318462 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 00:23:01.318469 kernel: SMP: Total of 2 processors activated. 
Jul 2 00:23:01.318478 kernel: CPU features: detected: 32-bit EL0 Support Jul 2 00:23:01.318486 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jul 2 00:23:01.318493 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 2 00:23:01.318501 kernel: CPU features: detected: CRC32 instructions Jul 2 00:23:01.318509 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 2 00:23:01.318516 kernel: CPU features: detected: LSE atomic instructions Jul 2 00:23:01.318524 kernel: CPU features: detected: Privileged Access Never Jul 2 00:23:01.318531 kernel: CPU: All CPU(s) started at EL1 Jul 2 00:23:01.318538 kernel: alternatives: applying system-wide alternatives Jul 2 00:23:01.318548 kernel: devtmpfs: initialized Jul 2 00:23:01.318555 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:23:01.318562 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 00:23:01.318570 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:23:01.318577 kernel: SMBIOS 3.1.0 present. 
Jul 2 00:23:01.318585 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Jul 2 00:23:01.318592 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:23:01.318599 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 2 00:23:01.318607 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 2 00:23:01.318616 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 2 00:23:01.318624 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:23:01.318631 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Jul 2 00:23:01.318638 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:23:01.318646 kernel: cpuidle: using governor menu Jul 2 00:23:01.318653 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 2 00:23:01.318661 kernel: ASID allocator initialised with 32768 entries Jul 2 00:23:01.318669 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:23:01.318676 kernel: Serial: AMBA PL011 UART driver Jul 2 00:23:01.318685 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 2 00:23:01.318692 kernel: Modules: 0 pages in range for non-PLT usage Jul 2 00:23:01.318700 kernel: Modules: 509120 pages in range for PLT usage Jul 2 00:23:01.318707 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:23:01.318714 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:23:01.318722 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 2 00:23:01.318729 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 2 00:23:01.318737 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:23:01.318744 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:23:01.318754 kernel: HugeTLB: registered 64.0 KiB page 
size, pre-allocated 0 pages Jul 2 00:23:01.318761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 2 00:23:01.318768 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:23:01.318776 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:23:01.318783 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:23:01.318790 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:23:01.318798 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 00:23:01.318805 kernel: ACPI: Interpreter enabled Jul 2 00:23:01.318813 kernel: ACPI: Using GIC for interrupt routing Jul 2 00:23:01.318822 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jul 2 00:23:01.318829 kernel: printk: console [ttyAMA0] enabled Jul 2 00:23:01.318837 kernel: printk: bootconsole [pl11] disabled Jul 2 00:23:01.318844 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jul 2 00:23:01.318852 kernel: iommu: Default domain type: Translated Jul 2 00:23:01.318860 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 2 00:23:01.318867 kernel: efivars: Registered efivars operations Jul 2 00:23:01.318874 kernel: vgaarb: loaded Jul 2 00:23:01.318882 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 2 00:23:01.318891 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 00:23:01.318898 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:23:01.318918 kernel: pnp: PnP ACPI init Jul 2 00:23:01.318925 kernel: pnp: PnP ACPI: found 0 devices Jul 2 00:23:01.318933 kernel: NET: Registered PF_INET protocol family Jul 2 00:23:01.318940 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 00:23:01.318948 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 00:23:01.318955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:23:01.318963 kernel: TCP established hash table 
entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:23:01.318973 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 00:23:01.318980 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 00:23:01.318988 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:23:01.318995 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 00:23:01.319003 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:23:01.319010 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:23:01.319017 kernel: kvm [1]: HYP mode not available Jul 2 00:23:01.319025 kernel: Initialise system trusted keyrings Jul 2 00:23:01.319032 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 00:23:01.319041 kernel: Key type asymmetric registered Jul 2 00:23:01.319049 kernel: Asymmetric key parser 'x509' registered Jul 2 00:23:01.319056 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jul 2 00:23:01.319064 kernel: io scheduler mq-deadline registered Jul 2 00:23:01.319071 kernel: io scheduler kyber registered Jul 2 00:23:01.319078 kernel: io scheduler bfq registered Jul 2 00:23:01.319086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:23:01.319093 kernel: thunder_xcv, ver 1.0 Jul 2 00:23:01.319100 kernel: thunder_bgx, ver 1.0 Jul 2 00:23:01.319107 kernel: nicpf, ver 1.0 Jul 2 00:23:01.319116 kernel: nicvf, ver 1.0 Jul 2 00:23:01.319300 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 2 00:23:01.319376 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:23:00 UTC (1719879780) Jul 2 00:23:01.319387 kernel: efifb: probing for efifb Jul 2 00:23:01.319394 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 2 00:23:01.319402 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 2 00:23:01.319409 kernel: efifb: scrolling: redraw Jul 2 00:23:01.319419 kernel: efifb: Truecolor: size=8:8:8:8, 
shift=24:16:8:0 Jul 2 00:23:01.319426 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 00:23:01.319434 kernel: fb0: EFI VGA frame buffer device Jul 2 00:23:01.319441 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jul 2 00:23:01.319449 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 00:23:01.319456 kernel: No ACPI PMU IRQ for CPU0 Jul 2 00:23:01.319463 kernel: No ACPI PMU IRQ for CPU1 Jul 2 00:23:01.319470 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jul 2 00:23:01.319478 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jul 2 00:23:01.319487 kernel: watchdog: Hard watchdog permanently disabled Jul 2 00:23:01.319495 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:23:01.319502 kernel: Segment Routing with IPv6 Jul 2 00:23:01.319510 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:23:01.319517 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:23:01.319525 kernel: Key type dns_resolver registered Jul 2 00:23:01.319532 kernel: registered taskstats version 1 Jul 2 00:23:01.319539 kernel: Loading compiled-in X.509 certificates Jul 2 00:23:01.319547 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: 60660d9c77cbf90f55b5b3c47931cf5941193eaf' Jul 2 00:23:01.319556 kernel: Key type .fscrypt registered Jul 2 00:23:01.319563 kernel: Key type fscrypt-provisioning registered Jul 2 00:23:01.319571 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 00:23:01.319578 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:23:01.319586 kernel: ima: No architecture policies found Jul 2 00:23:01.319593 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 2 00:23:01.319601 kernel: clk: Disabling unused clocks Jul 2 00:23:01.319608 kernel: Freeing unused kernel memory: 39040K Jul 2 00:23:01.319616 kernel: Run /init as init process Jul 2 00:23:01.319625 kernel: with arguments: Jul 2 00:23:01.319632 kernel: /init Jul 2 00:23:01.319639 kernel: with environment: Jul 2 00:23:01.319647 kernel: HOME=/ Jul 2 00:23:01.319654 kernel: TERM=linux Jul 2 00:23:01.319661 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:23:01.319671 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:23:01.319681 systemd[1]: Detected virtualization microsoft. Jul 2 00:23:01.319691 systemd[1]: Detected architecture arm64. Jul 2 00:23:01.319698 systemd[1]: Running in initrd. Jul 2 00:23:01.319706 systemd[1]: No hostname configured, using default hostname. Jul 2 00:23:01.319714 systemd[1]: Hostname set to <localhost>. Jul 2 00:23:01.319722 systemd[1]: Initializing machine ID from random generator. Jul 2 00:23:01.319730 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:23:01.319738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:01.319746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:01.319756 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 2 00:23:01.319765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:23:01.319773 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:23:01.319781 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 2 00:23:01.319791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:23:01.319799 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:23:01.319807 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:01.319817 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:01.319825 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:23:01.319833 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:23:01.319841 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:23:01.319848 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:23:01.319856 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:23:01.319864 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:23:01.319872 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:23:01.319882 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:23:01.319890 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:01.319898 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:01.321970 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:01.321981 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:23:01.321990 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jul 2 00:23:01.321998 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:23:01.322007 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:23:01.322015 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:23:01.322030 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:23:01.322038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:23:01.322079 systemd-journald[216]: Collecting audit messages is disabled. Jul 2 00:23:01.322101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:01.322113 systemd-journald[216]: Journal started Jul 2 00:23:01.322132 systemd-journald[216]: Runtime Journal (/run/log/journal/3af3a342cb5d4ca5b402e2adf0095dba) is 8.0M, max 78.6M, 70.6M free. Jul 2 00:23:01.331645 systemd-modules-load[217]: Inserted module 'overlay' Jul 2 00:23:01.348786 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:23:01.353061 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:23:01.382655 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 00:23:01.382686 kernel: Bridge firewalling registered Jul 2 00:23:01.367688 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:01.382629 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 2 00:23:01.396619 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:23:01.410259 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:01.418759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:01.436163 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:23:01.446122 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jul 2 00:23:01.462116 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:23:01.490197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:23:01.499921 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:01.516135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:01.528977 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:23:01.549635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:01.569423 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:23:01.582584 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:23:01.597115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:23:01.623717 dracut-cmdline[251]: dracut-dracut-053 Jul 2 00:23:01.623717 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=894d8ea3debe01ca4faf80384c3adbf31dc72d8c1b6ccdad26befbaf28696295 Jul 2 00:23:01.663879 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:01.668292 systemd-resolved[254]: Positive Trust Anchors: Jul 2 00:23:01.668303 systemd-resolved[254]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:23:01.668333 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:23:01.670564 systemd-resolved[254]: Defaulting to hostname 'linux'. Jul 2 00:23:01.673893 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:23:01.698330 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:01.804925 kernel: SCSI subsystem initialized Jul 2 00:23:01.813934 kernel: Loading iSCSI transport class v2.0-870. Jul 2 00:23:01.824934 kernel: iscsi: registered transport (tcp) Jul 2 00:23:01.842768 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:23:01.842795 kernel: QLogic iSCSI HBA Driver Jul 2 00:23:01.876129 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:23:01.889258 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:23:01.918923 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 2 00:23:01.918989 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:23:01.928409 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:23:01.977929 kernel: raid6: neonx8 gen() 15735 MB/s Jul 2 00:23:01.997913 kernel: raid6: neonx4 gen() 15689 MB/s Jul 2 00:23:02.018911 kernel: raid6: neonx2 gen() 13242 MB/s Jul 2 00:23:02.038910 kernel: raid6: neonx1 gen() 10464 MB/s Jul 2 00:23:02.058910 kernel: raid6: int64x8 gen() 6969 MB/s Jul 2 00:23:02.079910 kernel: raid6: int64x4 gen() 7347 MB/s Jul 2 00:23:02.099911 kernel: raid6: int64x2 gen() 6130 MB/s Jul 2 00:23:02.123278 kernel: raid6: int64x1 gen() 5052 MB/s Jul 2 00:23:02.123299 kernel: raid6: using algorithm neonx8 gen() 15735 MB/s Jul 2 00:23:02.148106 kernel: raid6: .... xor() 11954 MB/s, rmw enabled Jul 2 00:23:02.148130 kernel: raid6: using neon recovery algorithm Jul 2 00:23:02.159756 kernel: xor: measuring software checksum speed Jul 2 00:23:02.159770 kernel: 8regs : 19854 MB/sec Jul 2 00:23:02.163482 kernel: 32regs : 19668 MB/sec Jul 2 00:23:02.171020 kernel: arm64_neon : 27170 MB/sec Jul 2 00:23:02.171041 kernel: xor: using function: arm64_neon (27170 MB/sec) Jul 2 00:23:02.221918 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:23:02.231849 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:23:02.247103 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:02.269709 systemd-udevd[437]: Using default interface naming scheme 'v255'. Jul 2 00:23:02.275125 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:02.294090 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:23:02.309022 dracut-pre-trigger[439]: rd.md=0: removing MD RAID activation Jul 2 00:23:02.338971 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 2 00:23:02.358232 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:23:02.399958 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:02.424147 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:23:02.452554 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:23:02.467009 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:23:02.483412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:02.497588 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:23:02.515164 kernel: hv_vmbus: Vmbus version:5.3 Jul 2 00:23:02.527919 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 00:23:02.527977 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 00:23:02.537074 kernel: PTP clock support registered Jul 2 00:23:02.537125 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 00:23:02.550162 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:23:02.573771 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 00:23:02.573795 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 00:23:02.573806 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 00:23:02.582912 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 00:23:02.575054 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:23:02.603877 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 00:23:02.575203 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 00:23:02.628438 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 00:23:02.628461 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 00:23:02.603959 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:23:02.874111 kernel: hv_vmbus: registering driver hv_utils Jul 2 00:23:02.874139 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 00:23:02.874149 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 00:23:02.874159 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 00:23:02.874168 kernel: scsi host0: storvsc_host_t Jul 2 00:23:02.874354 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 00:23:02.874378 kernel: scsi host1: storvsc_host_t Jul 2 00:23:02.874466 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 00:23:02.614939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:02.615168 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:02.846522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:02.848591 systemd-resolved[254]: Clock change detected. Flushing caches. Jul 2 00:23:02.890620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:02.951798 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 00:23:02.953411 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 00:23:02.953430 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: VF slot 1 added Jul 2 00:23:02.953647 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 00:23:02.907351 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:23:02.919543 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:02.972251 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 2 00:23:03.011906 kernel: hv_vmbus: registering driver hv_pci Jul 2 00:23:03.011940 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 00:23:03.076408 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 00:23:03.076531 kernel: hv_pci d3fb7eb2-5c68-483d-b558-57b0d95abe6f: PCI VMBus probing: Using version 0x10004 Jul 2 00:23:03.118294 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 00:23:03.118448 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 00:23:03.118787 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 00:23:03.118911 kernel: hv_pci d3fb7eb2-5c68-483d-b558-57b0d95abe6f: PCI host bridge to bus 5c68:00 Jul 2 00:23:03.119017 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:23:03.119028 kernel: pci_bus 5c68:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jul 2 00:23:03.119129 kernel: pci_bus 5c68:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 00:23:03.119208 kernel: pci 5c68:00:02.0: [15b3:1018] type 00 class 0x020000 Jul 2 00:23:03.119309 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 00:23:03.119403 kernel: pci 5c68:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 00:23:03.119507 kernel: pci 5c68:00:02.0: enabling Extended Tags Jul 2 00:23:03.119636 kernel: pci 5c68:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5c68:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jul 2 00:23:03.119727 kernel: pci_bus 5c68:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 00:23:03.119807 kernel: pci 5c68:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jul 2 00:23:03.037804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 2 00:23:03.167595 kernel: mlx5_core 5c68:00:02.0: enabling device (0000 -> 0002) Jul 2 00:23:03.394809 kernel: mlx5_core 5c68:00:02.0: firmware version: 16.30.1284 Jul 2 00:23:03.394971 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: VF registering: eth1 Jul 2 00:23:03.395281 kernel: mlx5_core 5c68:00:02.0 eth1: joined to eth0 Jul 2 00:23:03.395381 kernel: mlx5_core 5c68:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jul 2 00:23:03.403595 kernel: mlx5_core 5c68:00:02.0 enP23656s1: renamed from eth1 Jul 2 00:23:03.460814 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jul 2 00:23:03.549687 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (500) Jul 2 00:23:03.564379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 00:23:03.597083 kernel: BTRFS: device fsid 2e7aff7f-b51e-4094-8f16-54690a62fb17 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (481) Jul 2 00:23:03.591613 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 2 00:23:03.603294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 2 00:23:03.611811 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 2 00:23:03.640871 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:23:03.668575 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:23:03.674578 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:23:04.685580 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 00:23:04.687089 disk-uuid[598]: The operation has completed successfully. Jul 2 00:23:04.754908 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:23:04.755021 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. 
Jul 2 00:23:04.781728 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:23:04.794343 sh[711]: Success Jul 2 00:23:04.824588 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 2 00:23:04.999016 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:23:05.007532 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:23:05.021762 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:23:05.054131 kernel: BTRFS info (device dm-0): first mount of filesystem 2e7aff7f-b51e-4094-8f16-54690a62fb17 Jul 2 00:23:05.054185 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:23:05.064131 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:23:05.070048 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:23:05.074463 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:23:05.544879 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:23:05.550672 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:23:05.570900 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:23:05.578777 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:23:05.613217 kernel: BTRFS info (device sda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:23:05.613271 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:23:05.617696 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:23:05.648960 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:23:05.665229 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 00:23:05.670753 kernel: BTRFS info (device sda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:23:05.678987 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:23:05.695823 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:23:05.716666 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:23:05.737702 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:23:05.764775 systemd-networkd[896]: lo: Link UP Jul 2 00:23:05.764786 systemd-networkd[896]: lo: Gained carrier Jul 2 00:23:05.766681 systemd-networkd[896]: Enumeration completed Jul 2 00:23:05.769208 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:23:05.769750 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:05.769753 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:23:05.779533 systemd[1]: Reached target network.target - Network. Jul 2 00:23:05.866579 kernel: mlx5_core 5c68:00:02.0 enP23656s1: Link up Jul 2 00:23:05.907582 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: Data path switched to VF: enP23656s1 Jul 2 00:23:05.908295 systemd-networkd[896]: enP23656s1: Link UP Jul 2 00:23:05.908394 systemd-networkd[896]: eth0: Link UP Jul 2 00:23:05.908488 systemd-networkd[896]: eth0: Gained carrier Jul 2 00:23:05.908497 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 2 00:23:05.921162 systemd-networkd[896]: enP23656s1: Gained carrier Jul 2 00:23:05.941683 systemd-networkd[896]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 00:23:06.481945 ignition[885]: Ignition 2.18.0 Jul 2 00:23:06.481957 ignition[885]: Stage: fetch-offline Jul 2 00:23:06.486851 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:23:06.481997 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:06.482005 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:23:06.482105 ignition[885]: parsed url from cmdline: "" Jul 2 00:23:06.510876 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 00:23:06.482110 ignition[885]: no config URL provided Jul 2 00:23:06.482115 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:23:06.482122 ignition[885]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:23:06.482127 ignition[885]: failed to fetch config: resource requires networking Jul 2 00:23:06.482324 ignition[885]: Ignition finished successfully Jul 2 00:23:06.535017 ignition[906]: Ignition 2.18.0 Jul 2 00:23:06.535024 ignition[906]: Stage: fetch Jul 2 00:23:06.535218 ignition[906]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:06.535228 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:23:06.535343 ignition[906]: parsed url from cmdline: "" Jul 2 00:23:06.535349 ignition[906]: no config URL provided Jul 2 00:23:06.535354 ignition[906]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:23:06.535361 ignition[906]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:23:06.535395 ignition[906]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 00:23:06.620811 ignition[906]: GET result: OK Jul 2 00:23:06.620851 ignition[906]: failed to retrieve userdata from IMDS, falling back to 
custom data: not a config (empty) Jul 2 00:23:06.641355 ignition[906]: opening config device: "/dev/sr0" Jul 2 00:23:06.642009 ignition[906]: getting drive status for "/dev/sr0" Jul 2 00:23:06.642068 ignition[906]: drive status: OK Jul 2 00:23:06.642105 ignition[906]: mounting config device Jul 2 00:23:06.642127 ignition[906]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure3486351601" Jul 2 00:23:06.662184 ignition[906]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure3486351601" Jul 2 00:23:06.669320 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/07/03 00:00 (1000) Jul 2 00:23:06.662194 ignition[906]: checking for config drive Jul 2 00:23:06.669118 ignition[906]: reading config Jul 2 00:23:06.669472 ignition[906]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure3486351601" Jul 2 00:23:06.669641 ignition[906]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure3486351601" Jul 2 00:23:06.669911 systemd[1]: tmp-ignition\x2dazure3486351601.mount: Deactivated successfully. Jul 2 00:23:06.669657 ignition[906]: config has been read from custom data Jul 2 00:23:06.674317 unknown[906]: fetched base config from "system" Jul 2 00:23:06.670288 ignition[906]: parsing config with SHA512: 6dd2f975b445b3ee2c4689e52d5aea775e4f0e70c2b73296e7223dcc39edc83bfc8c0021aeb901a601d24796b6bbb301bab94079b49b4062b0d37a958bcf7184 Jul 2 00:23:06.674324 unknown[906]: fetched base config from "system" Jul 2 00:23:06.674704 ignition[906]: fetch: fetch complete Jul 2 00:23:06.674329 unknown[906]: fetched user config from "azure" Jul 2 00:23:06.674709 ignition[906]: fetch: fetch passed Jul 2 00:23:06.678607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 00:23:06.674748 ignition[906]: Ignition finished successfully Jul 2 00:23:06.705830 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 2 00:23:06.724385 ignition[914]: Ignition 2.18.0 Jul 2 00:23:06.730066 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:23:06.724407 ignition[914]: Stage: kargs Jul 2 00:23:06.749047 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:23:06.724725 ignition[914]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:06.724763 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:23:06.727040 ignition[914]: kargs: kargs passed Jul 2 00:23:06.727128 ignition[914]: Ignition finished successfully Jul 2 00:23:06.793914 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:23:06.782062 ignition[921]: Ignition 2.18.0 Jul 2 00:23:06.804715 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:23:06.782069 ignition[921]: Stage: disks Jul 2 00:23:06.816574 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:23:06.782386 ignition[921]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:06.827613 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:23:06.782402 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:23:06.839843 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:23:06.789426 ignition[921]: disks: disks passed Jul 2 00:23:06.849867 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:23:06.789527 ignition[921]: Ignition finished successfully Jul 2 00:23:06.882833 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:23:07.013960 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 2 00:23:07.029685 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:23:07.047839 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 2 00:23:07.105622 kernel: EXT4-fs (sda9): mounted filesystem 95038baa-e9f1-4207-86a5-38a4ce3cff7d r/w with ordered data mode. Quota mode: none. Jul 2 00:23:07.106682 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:23:07.111610 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:23:07.151633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:23:07.158696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:23:07.179080 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 2 00:23:07.199624 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (942) Jul 2 00:23:07.199649 kernel: BTRFS info (device sda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:23:07.199660 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:23:07.199670 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:23:07.200571 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:23:07.213604 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:23:07.213660 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:23:07.238773 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:23:07.243914 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:23:07.257779 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jul 2 00:23:07.332670 systemd-networkd[896]: eth0: Gained IPv6LL Jul 2 00:23:07.333019 systemd-networkd[896]: enP23656s1: Gained IPv6LL Jul 2 00:23:08.012921 coreos-metadata[944]: Jul 02 00:23:08.012 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 00:23:08.022432 coreos-metadata[944]: Jul 02 00:23:08.016 INFO Fetch successful Jul 2 00:23:08.022432 coreos-metadata[944]: Jul 02 00:23:08.016 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 00:23:08.042184 coreos-metadata[944]: Jul 02 00:23:08.028 INFO Fetch successful Jul 2 00:23:08.042184 coreos-metadata[944]: Jul 02 00:23:08.041 INFO wrote hostname ci-3975.1.1-a-3e8d94ffa6 to /sysroot/etc/hostname Jul 2 00:23:08.042799 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 00:23:08.316955 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:23:08.360897 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:23:08.368078 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:23:08.385258 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:23:09.286483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:23:09.299971 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:23:09.311462 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:23:09.332829 kernel: BTRFS info (device sda6): last unmount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:23:09.326887 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 00:23:09.356241 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jul 2 00:23:09.367757 ignition[1061]: INFO : Ignition 2.18.0 Jul 2 00:23:09.367757 ignition[1061]: INFO : Stage: mount Jul 2 00:23:09.367757 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:09.367757 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:23:09.395503 ignition[1061]: INFO : mount: mount passed Jul 2 00:23:09.395503 ignition[1061]: INFO : Ignition finished successfully Jul 2 00:23:09.375871 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:23:09.399757 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:23:09.420848 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:23:09.452199 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1072) Jul 2 00:23:09.452265 kernel: BTRFS info (device sda6): first mount of filesystem f333e8f9-4cd9-418a-86af-1531564c69c1 Jul 2 00:23:09.458564 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jul 2 00:23:09.462586 kernel: BTRFS info (device sda6): using free space tree Jul 2 00:23:09.469593 kernel: BTRFS info (device sda6): auto enabling async discard Jul 2 00:23:09.470906 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:23:09.494817 ignition[1090]: INFO : Ignition 2.18.0 Jul 2 00:23:09.494817 ignition[1090]: INFO : Stage: files Jul 2 00:23:09.502799 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:09.502799 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:23:09.502799 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:23:09.567928 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:23:09.567928 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:23:09.649939 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:23:09.657577 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:23:09.657577 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:23:09.650337 unknown[1090]: wrote ssh authorized keys file for user: core Jul 2 00:23:09.729562 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:23:09.740159 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 2 00:23:10.157720 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 00:23:10.379566 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 
00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 00:23:10.390611 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jul 2 00:23:10.796030 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 00:23:11.001589 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 00:23:11.001589 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:11.022662 ignition[1090]: INFO : files: files passed Jul 2 00:23:11.022662 ignition[1090]: INFO : Ignition finished successfully Jul 2 00:23:11.023050 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:23:11.062384 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:23:11.078792 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 2 00:23:11.108079 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:23:11.175185 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:11.175185 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:11.108196 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:23:11.207857 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:11.149687 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:23:11.157479 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:23:11.184856 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:23:11.233151 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:23:11.233301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:23:11.244480 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:23:11.255524 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:23:11.268084 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:23:11.270787 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:23:11.303408 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:11.313833 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:23:11.336824 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:11.349982 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jul 2 00:23:11.356813 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:23:11.368024 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:23:11.368204 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:11.384398 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:23:11.396436 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:23:11.406637 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:23:11.417173 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:23:11.429344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:23:11.441884 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:23:11.453631 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:23:11.466391 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:23:11.478631 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:23:11.489465 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:23:11.498932 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:23:11.499112 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:23:11.514817 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:11.526046 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:11.538294 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:23:11.538400 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:11.551388 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:23:11.551574 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jul 2 00:23:11.569540 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:23:11.569735 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:23:11.581757 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:23:11.581910 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:23:11.592400 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 00:23:11.592549 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 00:23:11.624710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:23:11.641083 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:23:11.661029 ignition[1142]: INFO : Ignition 2.18.0 Jul 2 00:23:11.661029 ignition[1142]: INFO : Stage: umount Jul 2 00:23:11.661029 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:11.661029 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 00:23:11.661029 ignition[1142]: INFO : umount: umount passed Jul 2 00:23:11.661029 ignition[1142]: INFO : Ignition finished successfully Jul 2 00:23:11.641347 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:11.665869 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:23:11.678440 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:23:11.678632 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:11.691217 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:23:11.691342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:23:11.706049 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:23:11.706139 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 2 00:23:11.718566 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:23:11.718847 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:23:11.730146 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:23:11.730208 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:23:11.736341 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:23:11.736388 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 00:23:11.742324 systemd[1]: Stopped target network.target - Network. Jul 2 00:23:11.757506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:23:11.757611 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:23:11.770250 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:23:11.775347 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:23:11.780567 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:11.787985 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:23:11.798807 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:23:11.809412 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:23:11.809469 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:23:11.822730 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:23:11.822793 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:23:11.833847 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:23:11.833915 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:23:11.844306 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:23:11.844359 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 2 00:23:11.855074 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:23:11.865332 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:23:11.880498 systemd-networkd[896]: eth0: DHCPv6 lease lost Jul 2 00:23:11.882603 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:23:12.100375 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: Data path switched from VF: enP23656s1 Jul 2 00:23:11.883274 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:23:11.883357 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 00:23:11.899099 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:23:11.899205 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:23:11.911081 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:23:11.912599 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:23:11.924128 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:23:11.924188 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:11.952806 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:23:11.958156 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:23:11.958237 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:23:11.965532 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:23:11.965608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:11.975929 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:23:11.975981 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:11.986500 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jul 2 00:23:11.986570 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:11.999055 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:12.049110 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:23:12.049296 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:12.062884 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:23:12.062927 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:12.084972 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:23:12.085018 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:12.095744 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:23:12.095803 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:23:12.112048 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:23:12.112102 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:23:12.135048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:23:12.135117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:12.171813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:23:12.185719 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:23:12.185785 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:12.201028 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:12.201089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:12.212742 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 2 00:23:12.212835 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:23:12.223521 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:23:12.223614 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:23:13.020506 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:23:13.020664 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:23:13.031284 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:23:13.041336 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:23:13.041409 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:23:13.064820 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:23:13.077964 systemd[1]: Switching root. Jul 2 00:23:13.140682 systemd-journald[216]: Journal stopped Jul 2 00:23:17.883737 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jul 2 00:23:17.883776 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:23:17.883787 kernel: SELinux: policy capability open_perms=1 Jul 2 00:23:17.883799 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:23:17.883807 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:23:17.883815 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:23:17.883826 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:23:17.883834 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:23:17.883842 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:23:17.883850 kernel: audit: type=1403 audit(1719879794.487:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:23:17.883861 systemd[1]: Successfully loaded SELinux policy in 151.233ms. Jul 2 00:23:17.883871 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.310ms. 
Jul 2 00:23:17.883880 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:23:17.883890 systemd[1]: Detected virtualization microsoft. Jul 2 00:23:17.883901 systemd[1]: Detected architecture arm64. Jul 2 00:23:17.883910 systemd[1]: Detected first boot. Jul 2 00:23:17.883920 systemd[1]: Hostname set to . Jul 2 00:23:17.883928 systemd[1]: Initializing machine ID from random generator. Jul 2 00:23:17.883937 zram_generator::config[1183]: No configuration found. Jul 2 00:23:17.883947 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:23:17.883956 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:23:17.883967 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 00:23:17.883976 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:23:17.883986 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:23:17.883995 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:23:17.884005 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:23:17.884014 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:23:17.884024 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:23:17.884036 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:23:17.884045 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:23:17.884055 systemd[1]: Created slice user.slice - User and Session Slice. 
Jul 2 00:23:17.884064 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:17.884073 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:17.884083 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:23:17.884092 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:23:17.884102 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:23:17.884113 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:23:17.884123 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 2 00:23:17.884132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:17.884141 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 00:23:17.884153 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 00:23:17.884162 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 00:23:17.884172 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:23:17.884182 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:17.884193 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:23:17.884202 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:23:17.884212 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:23:17.884221 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:23:17.884232 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:23:17.884241 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 2 00:23:17.884251 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:17.884262 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:17.884272 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:23:17.884282 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:23:17.884292 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:23:17.884301 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:23:17.884311 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:23:17.884322 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:23:17.884332 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 00:23:17.884342 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:23:17.884352 systemd[1]: Reached target machines.target - Containers. Jul 2 00:23:17.884362 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:23:17.884372 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:17.884382 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:23:17.884391 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:23:17.884402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:17.884412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:23:17.884421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:17.884431 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jul 2 00:23:17.884445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:17.884455 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:23:17.884464 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:23:17.884474 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 00:23:17.884486 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:23:17.884496 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:23:17.884505 kernel: fuse: init (API version 7.39) Jul 2 00:23:17.884513 kernel: loop: module loaded Jul 2 00:23:17.884522 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:23:17.884532 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:23:17.884541 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:23:17.884551 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:23:17.884590 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:23:17.884604 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:23:17.884614 systemd[1]: Stopped verity-setup.service. Jul 2 00:23:17.884650 systemd-journald[1285]: Collecting audit messages is disabled. Jul 2 00:23:17.884671 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:23:17.884683 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:23:17.884694 systemd-journald[1285]: Journal started Jul 2 00:23:17.884714 systemd-journald[1285]: Runtime Journal (/run/log/journal/8a0f42e303154015a5e51f487e86d926) is 8.0M, max 78.6M, 70.6M free. Jul 2 00:23:16.838659 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 00:23:16.936903 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 2 00:23:16.937401 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:23:16.937818 systemd[1]: systemd-journald.service: Consumed 3.160s CPU time. Jul 2 00:23:17.894570 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:23:17.911576 kernel: ACPI: bus type drm_connector registered Jul 2 00:23:17.907999 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:23:17.913660 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:23:17.919940 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:23:17.926210 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:23:17.931898 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:23:17.938597 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:17.945919 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:23:17.946060 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:23:17.952810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:17.952946 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:23:17.959398 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:23:17.959533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:23:17.966300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:17.966448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:23:17.973736 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:23:17.973865 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 00:23:17.980182 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 2 00:23:17.980306 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:17.987988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:17.996584 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:23:18.003853 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:23:18.011135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:18.027601 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:23:18.038675 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:23:18.048175 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:23:18.054477 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:23:18.054520 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:23:18.062936 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:23:18.080738 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 00:23:18.088800 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:23:18.094813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:18.103995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:23:18.111764 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:23:18.118866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 00:23:18.120012 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:23:18.126319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:23:18.130876 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:23:18.139790 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:23:18.152857 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:23:18.164926 systemd-journald[1285]: Time spent on flushing to /var/log/journal/8a0f42e303154015a5e51f487e86d926 is 22.378ms for 906 entries. Jul 2 00:23:18.164926 systemd-journald[1285]: System Journal (/var/log/journal/8a0f42e303154015a5e51f487e86d926) is 8.0M, max 2.6G, 2.6G free. Jul 2 00:23:18.210054 systemd-journald[1285]: Received client request to flush runtime journal. Jul 2 00:23:18.178770 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:23:18.190358 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:23:18.203130 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:23:18.211317 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:23:18.219879 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:23:18.230912 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:18.237494 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 00:23:18.256662 kernel: loop0: detected capacity change from 0 to 113672 Jul 2 00:23:18.256760 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:23:18.260043 udevadm[1319]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 00:23:18.261229 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:23:18.276525 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:23:18.321688 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:23:18.323605 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:23:18.500159 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:23:18.510864 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:23:18.580822 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Jul 2 00:23:18.580841 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Jul 2 00:23:18.585146 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:18.813621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:23:18.836602 kernel: loop1: detected capacity change from 0 to 194096 Jul 2 00:23:18.886585 kernel: loop2: detected capacity change from 0 to 59672 Jul 2 00:23:19.188585 kernel: loop3: detected capacity change from 0 to 56592 Jul 2 00:23:19.635584 kernel: loop4: detected capacity change from 0 to 113672 Jul 2 00:23:19.723602 kernel: loop5: detected capacity change from 0 to 194096 Jul 2 00:23:19.733597 kernel: loop6: detected capacity change from 0 to 59672 Jul 2 00:23:19.742569 kernel: loop7: detected capacity change from 0 to 56592 Jul 2 00:23:19.744393 (sd-merge)[1341]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 2 00:23:19.744821 (sd-merge)[1341]: Merged extensions into '/usr'. Jul 2 00:23:19.748469 systemd[1]: Reloading requested from client PID 1316 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:23:19.748767 systemd[1]: Reloading... 
Jul 2 00:23:19.839725 zram_generator::config[1374]: No configuration found. Jul 2 00:23:19.953084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:23:20.010356 systemd[1]: Reloading finished in 261 ms. Jul 2 00:23:20.047702 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:23:20.055001 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:23:20.070749 systemd[1]: Starting ensure-sysext.service... Jul 2 00:23:20.076355 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:23:20.085796 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:20.103211 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:23:20.103481 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:23:20.104154 systemd-tmpfiles[1422]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:23:20.104368 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Jul 2 00:23:20.104419 systemd-tmpfiles[1422]: ACLs are not supported, ignoring. Jul 2 00:23:20.112162 systemd[1]: Reloading requested from client PID 1421 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:23:20.112180 systemd[1]: Reloading... Jul 2 00:23:20.118667 systemd-tmpfiles[1422]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:23:20.118680 systemd-tmpfiles[1422]: Skipping /boot Jul 2 00:23:20.125709 systemd-udevd[1423]: Using default interface naming scheme 'v255'. 
Jul 2 00:23:20.133260 systemd-tmpfiles[1422]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:23:20.133280 systemd-tmpfiles[1422]: Skipping /boot Jul 2 00:23:20.193592 zram_generator::config[1449]: No configuration found. Jul 2 00:23:20.295655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:23:20.356879 systemd[1]: Reloading finished in 244 ms. Jul 2 00:23:20.368265 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:20.386386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:20.460722 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1519) Jul 2 00:23:20.463928 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:23:20.512019 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:23:20.530789 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:23:20.552796 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:23:20.575628 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:23:20.575717 kernel: hv_vmbus: registering driver hv_balloon Jul 2 00:23:20.570207 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:23:20.581353 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:23:20.593269 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 00:23:20.601605 kernel: hv_balloon: Memory hot add disabled on ARM64 Jul 2 00:23:20.618046 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jul 2 00:23:20.634695 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 00:23:20.634799 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 00:23:20.635701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:20.645886 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 00:23:20.646000 kernel: Console: switching to colour dummy device 80x25 Jul 2 00:23:20.652190 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:20.665211 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 00:23:20.670210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:23:20.681349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:20.692882 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:20.699746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:20.700462 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:23:20.713151 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:23:20.724009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:20.741552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:20.743827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:23:20.754367 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:23:20.754538 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:23:20.764170 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:20.764623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 2 00:23:20.774784 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:23:20.777797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:20.798743 systemd[1]: Finished ensure-sysext.service. Jul 2 00:23:20.814056 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:23:20.829594 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:23:20.845416 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:23:20.845633 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1506) Jul 2 00:23:20.859999 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:23:20.860150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:23:20.865049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:20.867644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:20.901825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:20.934340 augenrules[1619]: No rules Jul 2 00:23:20.935288 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:23:20.963965 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 00:23:20.982727 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:23:21.027205 systemd-resolved[1555]: Positive Trust Anchors: Jul 2 00:23:21.027552 systemd-resolved[1555]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:23:21.027914 systemd-resolved[1555]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:23:21.037721 systemd-resolved[1555]: Using system hostname 'ci-3975.1.1-a-3e8d94ffa6'. Jul 2 00:23:21.042871 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:23:21.050661 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:21.051083 systemd-networkd[1552]: lo: Link UP Jul 2 00:23:21.051087 systemd-networkd[1552]: lo: Gained carrier Jul 2 00:23:21.055859 systemd-networkd[1552]: Enumeration completed Jul 2 00:23:21.056577 systemd-networkd[1552]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:21.056684 systemd-networkd[1552]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:23:21.060852 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:23:21.068156 systemd[1]: Reached target network.target - Network. Jul 2 00:23:21.081799 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:23:21.090610 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:23:21.099072 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jul 2 00:23:21.110389 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:23:21.124764 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:23:21.131991 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:23:21.152590 kernel: mlx5_core 5c68:00:02.0 enP23656s1: Link up Jul 2 00:23:21.179599 kernel: hv_netvsc 000d3af9-c84b-000d-3af9-c84b000d3af9 eth0: Data path switched to VF: enP23656s1 Jul 2 00:23:21.182396 systemd-networkd[1552]: enP23656s1: Link UP Jul 2 00:23:21.182787 systemd-networkd[1552]: eth0: Link UP Jul 2 00:23:21.182989 systemd-networkd[1552]: eth0: Gained carrier Jul 2 00:23:21.183101 systemd-networkd[1552]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:21.187933 systemd-networkd[1552]: enP23656s1: Gained carrier Jul 2 00:23:21.196635 systemd-networkd[1552]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jul 2 00:23:21.212035 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:21.238197 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:23:21.245722 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:21.257739 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:23:21.263733 lvm[1642]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:21.293177 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:23:21.660606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 2 00:23:22.756715 systemd-networkd[1552]: eth0: Gained IPv6LL Jul 2 00:23:22.758715 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:23:22.767491 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:23:22.884722 systemd-networkd[1552]: enP23656s1: Gained IPv6LL Jul 2 00:23:23.944254 ldconfig[1311]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:23:23.954089 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:23:23.964727 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:23:23.978319 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:23:23.985248 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:23:23.991255 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:23:23.997869 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:23:24.005549 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:23:24.011744 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:23:24.018837 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:23:24.025846 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:23:24.025882 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:23:24.030959 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:23:24.073895 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:23:24.081631 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jul 2 00:23:24.091611 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:23:24.097884 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:23:24.103950 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:23:24.109156 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:23:24.114229 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:24.114257 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:24.121713 systemd[1]: Starting chronyd.service - NTP client/server... Jul 2 00:23:24.129720 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:23:24.145142 (chronyd)[1653]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jul 2 00:23:24.148725 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:23:24.155740 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:23:24.169540 chronyd[1661]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jul 2 00:23:24.170745 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:23:24.182752 jq[1660]: false Jul 2 00:23:24.183308 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:23:24.189372 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:23:24.191719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:24.209349 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:23:24.219048 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jul 2 00:23:24.224825 chronyd[1661]: Timezone right/UTC failed leap second check, ignoring Jul 2 00:23:24.225062 chronyd[1661]: Loaded seccomp filter (level 2) Jul 2 00:23:24.228761 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 00:23:24.240898 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:23:24.241639 extend-filesystems[1662]: Found loop4 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found loop5 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found loop6 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found loop7 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda1 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda2 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda3 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found usr Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda4 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda6 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda7 Jul 2 00:23:24.241639 extend-filesystems[1662]: Found sda9 Jul 2 00:23:24.241639 extend-filesystems[1662]: Checking size of /dev/sda9 Jul 2 00:23:24.473007 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1701) Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.390 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.402 INFO Fetch successful Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.402 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.410 INFO Fetch successful Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.410 INFO Fetching 
http://168.63.129.16/machine/af87bc27-203e-47c2-9b9d-20668f936d68/865a3729%2Da110%2D4937%2Dbf0e%2D7a9c7027ed81.%5Fci%2D3975.1.1%2Da%2D3e8d94ffa6?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.412 INFO Fetch successful Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.413 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 2 00:23:24.473093 coreos-metadata[1655]: Jul 02 00:23:24.433 INFO Fetch successful Jul 2 00:23:24.253983 dbus-daemon[1656]: [system] SELinux support is enabled Jul 2 00:23:24.488913 extend-filesystems[1662]: Old size kept for /dev/sda9 Jul 2 00:23:24.488913 extend-filesystems[1662]: Found sr0 Jul 2 00:23:24.255800 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:23:24.276681 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:23:24.290891 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:23:24.291412 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:23:24.530262 update_engine[1684]: I0702 00:23:24.456813 1684 main.cc:92] Flatcar Update Engine starting Jul 2 00:23:24.530262 update_engine[1684]: I0702 00:23:24.463884 1684 update_check_scheduler.cc:74] Next update check in 2m31s Jul 2 00:23:24.302847 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:23:24.533481 jq[1689]: true Jul 2 00:23:24.329985 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:23:24.341472 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:23:24.352990 systemd[1]: Started chronyd.service - NTP client/server. 
Jul 2 00:23:24.377998 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:23:24.378208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:23:24.378482 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:23:24.378648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:23:24.410018 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:23:24.410194 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:23:24.436598 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:23:24.455663 systemd-logind[1680]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:23:24.463633 systemd-logind[1680]: New seat seat0. Jul 2 00:23:24.477894 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:23:24.496237 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:23:24.498610 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:23:24.566760 jq[1733]: true Jul 2 00:23:24.569989 (ntainerd)[1736]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:23:24.601964 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 00:23:24.618779 tar[1723]: linux-arm64/helm Jul 2 00:23:24.621519 dbus-daemon[1656]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 00:23:24.636368 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:23:24.646791 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 2 00:23:24.646996 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:23:24.647118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:23:24.660759 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:23:24.660885 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:23:24.681834 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:23:24.756844 bash[1774]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:23:24.760656 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:23:24.771388 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 00:23:24.937415 locksmithd[1775]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:23:25.240079 containerd[1736]: time="2024-07-02T00:23:25.239922300Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:23:25.276592 tar[1723]: linux-arm64/LICENSE Jul 2 00:23:25.276592 tar[1723]: linux-arm64/README.md Jul 2 00:23:25.286896 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 00:23:25.302838 containerd[1736]: time="2024-07-02T00:23:25.302783260Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:23:25.302838 containerd[1736]: time="2024-07-02T00:23:25.302843700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:23:25.304356 containerd[1736]: time="2024-07-02T00:23:25.304227540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.304356 containerd[1736]: time="2024-07-02T00:23:25.304270020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.304538 containerd[1736]: time="2024-07-02T00:23:25.304504700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.304538 containerd[1736]: time="2024-07-02T00:23:25.304534940Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.305713300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.305783060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.305796780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.305851980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.306041860Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.306060580Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.306070260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.306180940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.306195340Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.306244500Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:23:25.306399 containerd[1736]: time="2024-07-02T00:23:25.306255100Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:23:25.322374 containerd[1736]: time="2024-07-02T00:23:25.322329580Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:23:25.322607 containerd[1736]: time="2024-07-02T00:23:25.322590140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:23:25.322686 containerd[1736]: time="2024-07-02T00:23:25.322673020Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jul 2 00:23:25.322808 containerd[1736]: time="2024-07-02T00:23:25.322791300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:23:25.326089 containerd[1736]: time="2024-07-02T00:23:25.326044740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:23:25.326089 containerd[1736]: time="2024-07-02T00:23:25.326079700Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:23:25.326089 containerd[1736]: time="2024-07-02T00:23:25.326096060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:23:25.326300 containerd[1736]: time="2024-07-02T00:23:25.326262220Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:23:25.326300 containerd[1736]: time="2024-07-02T00:23:25.326287900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:23:25.326346 containerd[1736]: time="2024-07-02T00:23:25.326304820Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:23:25.326346 containerd[1736]: time="2024-07-02T00:23:25.326320140Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:23:25.326346 containerd[1736]: time="2024-07-02T00:23:25.326340340Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326358660Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326372340Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326385580Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326399420Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326412220Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326424980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326439260Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:23:25.326920 containerd[1736]: time="2024-07-02T00:23:25.326544060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:23:25.327862 containerd[1736]: time="2024-07-02T00:23:25.327832220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.327871340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.327888180Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.327912460Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.327971740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.327986620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.327999460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.328011740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.328025100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.328037060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.328048580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.328059900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328196 containerd[1736]: time="2024-07-02T00:23:25.328077700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328230620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328251540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328264100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328276700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328289700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328304380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328317580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:23:25.328432 containerd[1736]: time="2024-07-02T00:23:25.328341020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:25.331118 containerd[1736]: time="2024-07-02T00:23:25.330612500Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:23:25.331118 containerd[1736]: time="2024-07-02T00:23:25.330690860Z" level=info msg="Connect containerd service" Jul 2 00:23:25.331118 containerd[1736]: time="2024-07-02T00:23:25.330730940Z" level=info msg="using legacy CRI server" Jul 2 00:23:25.331118 containerd[1736]: time="2024-07-02T00:23:25.330738020Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:23:25.331118 containerd[1736]: time="2024-07-02T00:23:25.330840580Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331432300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331492180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331511580Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331521420Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331534220Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331590660Z" level=info msg="Start subscribing containerd event" Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331660980Z" level=info msg="Start recovering state" Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331740820Z" level=info msg="Start event monitor" Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331754260Z" level=info msg="Start snapshots syncer" Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331763340Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:23:25.332005 containerd[1736]: time="2024-07-02T00:23:25.331771660Z" level=info msg="Start streaming server" Jul 2 00:23:25.340470 containerd[1736]: time="2024-07-02T00:23:25.333744500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:23:25.340470 containerd[1736]: time="2024-07-02T00:23:25.333797140Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:23:25.340470 containerd[1736]: time="2024-07-02T00:23:25.333848820Z" level=info msg="containerd successfully booted in 0.099482s" Jul 2 00:23:25.333955 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:23:25.476747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 00:23:25.485122 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:25.983403 kubelet[1798]: E0702 00:23:25.983312 1798 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:25.986337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:25.986667 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:26.193850 sshd_keygen[1688]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 00:23:26.213687 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 00:23:26.225863 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 00:23:26.232841 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jul 2 00:23:26.238787 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 00:23:26.238984 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 00:23:26.259385 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 00:23:26.268032 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 00:23:26.275822 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jul 2 00:23:26.296004 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 00:23:26.302891 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 2 00:23:26.309541 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 00:23:26.314729 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 00:23:26.325995 systemd[1]: Startup finished in 685ms (kernel) + 13.389s (initrd) + 11.988s (userspace) = 26.064s.
Jul 2 00:23:26.520452 login[1829]: pam_lastlog(login:session): file /var/log/lastlog is locked/write
Jul 2 00:23:26.521961 login[1830]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 00:23:26.531378 systemd-logind[1680]: New session 2 of user core.
Jul 2 00:23:26.532994 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 00:23:26.543855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 00:23:26.555322 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 00:23:26.561923 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 00:23:26.572733 (systemd)[1837]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:26.710206 systemd[1837]: Queued start job for default target default.target.
Jul 2 00:23:26.715108 systemd[1837]: Created slice app.slice - User Application Slice.
Jul 2 00:23:26.715382 systemd[1837]: Reached target paths.target - Paths.
Jul 2 00:23:26.715459 systemd[1837]: Reached target timers.target - Timers.
Jul 2 00:23:26.716908 systemd[1837]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 2 00:23:26.727507 systemd[1837]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 2 00:23:26.727601 systemd[1837]: Reached target sockets.target - Sockets.
Jul 2 00:23:26.727615 systemd[1837]: Reached target basic.target - Basic System.
Jul 2 00:23:26.727661 systemd[1837]: Reached target default.target - Main User Target.
Jul 2 00:23:26.727694 systemd[1837]: Startup finished in 148ms.
Jul 2 00:23:26.727830 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 00:23:26.733756 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 00:23:27.333934 waagent[1826]: 2024-07-02T00:23:27.333831Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jul 2 00:23:27.339791 waagent[1826]: 2024-07-02T00:23:27.339708Z INFO Daemon Daemon OS: flatcar 3975.1.1
Jul 2 00:23:27.344335 waagent[1826]: 2024-07-02T00:23:27.344267Z INFO Daemon Daemon Python: 3.11.9
Jul 2 00:23:27.348885 waagent[1826]: 2024-07-02T00:23:27.348685Z INFO Daemon Daemon Run daemon
Jul 2 00:23:27.352771 waagent[1826]: 2024-07-02T00:23:27.352680Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.1.1'
Jul 2 00:23:27.361487 waagent[1826]: 2024-07-02T00:23:27.361416Z INFO Daemon Daemon Using waagent for provisioning
Jul 2 00:23:27.366950 waagent[1826]: 2024-07-02T00:23:27.366894Z INFO Daemon Daemon Activate resource disk
Jul 2 00:23:27.371688 waagent[1826]: 2024-07-02T00:23:27.371628Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jul 2 00:23:27.382904 waagent[1826]: 2024-07-02T00:23:27.382828Z INFO Daemon Daemon Found device: None
Jul 2 00:23:27.387502 waagent[1826]: 2024-07-02T00:23:27.387438Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jul 2 00:23:27.396380 waagent[1826]: 2024-07-02T00:23:27.396307Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jul 2 00:23:27.409577 waagent[1826]: 2024-07-02T00:23:27.409490Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 00:23:27.415215 waagent[1826]: 2024-07-02T00:23:27.415150Z INFO Daemon Daemon Running default provisioning handler
Jul 2 00:23:27.426896 waagent[1826]: 2024-07-02T00:23:27.426805Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jul 2 00:23:27.440754 waagent[1826]: 2024-07-02T00:23:27.440678Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jul 2 00:23:27.450186 waagent[1826]: 2024-07-02T00:23:27.450113Z INFO Daemon Daemon cloud-init is enabled: False
Jul 2 00:23:27.455166 waagent[1826]: 2024-07-02T00:23:27.455104Z INFO Daemon Daemon Copying ovf-env.xml
Jul 2 00:23:27.473244 waagent[1826]: 2024-07-02T00:23:27.470040Z INFO Daemon Daemon Successfully mounted dvd
Jul 2 00:23:27.497376 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jul 2 00:23:27.498867 waagent[1826]: 2024-07-02T00:23:27.498523Z INFO Daemon Daemon Detect protocol endpoint
Jul 2 00:23:27.503493 waagent[1826]: 2024-07-02T00:23:27.503429Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jul 2 00:23:27.509019 waagent[1826]: 2024-07-02T00:23:27.508957Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jul 2 00:23:27.515608 waagent[1826]: 2024-07-02T00:23:27.515538Z INFO Daemon Daemon Test for route to 168.63.129.16
Jul 2 00:23:27.521188 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jul 2 00:23:27.522163 waagent[1826]: 2024-07-02T00:23:27.521725Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jul 2 00:23:27.526428 systemd-logind[1680]: New session 1 of user core.
Jul 2 00:23:27.527589 waagent[1826]: 2024-07-02T00:23:27.526871Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jul 2 00:23:27.533793 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 00:23:27.542154 waagent[1826]: 2024-07-02T00:23:27.541797Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jul 2 00:23:27.548776 waagent[1826]: 2024-07-02T00:23:27.548729Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jul 2 00:23:27.555186 waagent[1826]: 2024-07-02T00:23:27.554412Z INFO Daemon Daemon Server preferred version:2015-04-05
Jul 2 00:23:28.002702 waagent[1826]: 2024-07-02T00:23:28.002597Z INFO Daemon Daemon Initializing goal state during protocol detection
Jul 2 00:23:28.009667 waagent[1826]: 2024-07-02T00:23:28.009595Z INFO Daemon Daemon Forcing an update of the goal state.
Jul 2 00:23:28.018752 waagent[1826]: 2024-07-02T00:23:28.018701Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 2 00:23:28.062338 waagent[1826]: 2024-07-02T00:23:28.062289Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151
Jul 2 00:23:28.068587 waagent[1826]: 2024-07-02T00:23:28.068520Z INFO Daemon
Jul 2 00:23:28.071714 waagent[1826]: 2024-07-02T00:23:28.071664Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 16420eed-c5d3-4ce1-853c-a40a8e1a740e eTag: 13712502944570608559 source: Fabric]
Jul 2 00:23:28.082982 waagent[1826]: 2024-07-02T00:23:28.082934Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jul 2 00:23:28.090075 waagent[1826]: 2024-07-02T00:23:28.090025Z INFO Daemon
Jul 2 00:23:28.092902 waagent[1826]: 2024-07-02T00:23:28.092851Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jul 2 00:23:28.103535 waagent[1826]: 2024-07-02T00:23:28.103493Z INFO Daemon Daemon Downloading artifacts profile blob
Jul 2 00:23:28.192365 waagent[1826]: 2024-07-02T00:23:28.192266Z INFO Daemon Downloaded certificate {'thumbprint': '7784C3B174F1228D410C1E83C73869537D169426', 'hasPrivateKey': False}
Jul 2 00:23:28.202529 waagent[1826]: 2024-07-02T00:23:28.202477Z INFO Daemon Downloaded certificate {'thumbprint': '74A76E1E8250831D53BB4A51A99FBA2614D3846F', 'hasPrivateKey': True}
Jul 2 00:23:28.212511 waagent[1826]: 2024-07-02T00:23:28.212460Z INFO Daemon Fetch goal state completed
Jul 2 00:23:28.223476 waagent[1826]: 2024-07-02T00:23:28.223419Z INFO Daemon Daemon Starting provisioning
Jul 2 00:23:28.228637 waagent[1826]: 2024-07-02T00:23:28.228572Z INFO Daemon Daemon Handle ovf-env.xml.
Jul 2 00:23:28.233393 waagent[1826]: 2024-07-02T00:23:28.233341Z INFO Daemon Daemon Set hostname [ci-3975.1.1-a-3e8d94ffa6]
Jul 2 00:23:28.265588 waagent[1826]: 2024-07-02T00:23:28.265015Z INFO Daemon Daemon Publish hostname [ci-3975.1.1-a-3e8d94ffa6]
Jul 2 00:23:28.271365 waagent[1826]: 2024-07-02T00:23:28.271295Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jul 2 00:23:28.277640 waagent[1826]: 2024-07-02T00:23:28.277580Z INFO Daemon Daemon Primary interface is [eth0]
Jul 2 00:23:28.354029 systemd-networkd[1552]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 00:23:28.354586 systemd-networkd[1552]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:23:28.354639 systemd-networkd[1552]: eth0: DHCP lease lost
Jul 2 00:23:28.355101 waagent[1826]: 2024-07-02T00:23:28.355010Z INFO Daemon Daemon Create user account if not exists
Jul 2 00:23:28.360607 waagent[1826]: 2024-07-02T00:23:28.360524Z INFO Daemon Daemon User core already exists, skip useradd
Jul 2 00:23:28.366579 waagent[1826]: 2024-07-02T00:23:28.366250Z INFO Daemon Daemon Configure sudoer
Jul 2 00:23:28.367641 systemd-networkd[1552]: eth0: DHCPv6 lease lost
Jul 2 00:23:28.370974 waagent[1826]: 2024-07-02T00:23:28.370899Z INFO Daemon Daemon Configure sshd
Jul 2 00:23:28.375328 waagent[1826]: 2024-07-02T00:23:28.375265Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jul 2 00:23:28.387951 waagent[1826]: 2024-07-02T00:23:28.387880Z INFO Daemon Daemon Deploy ssh public key.
Jul 2 00:23:28.402648 systemd-networkd[1552]: eth0: DHCPv4 address 10.200.20.12/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jul 2 00:23:28.407290 waagent[1826]: 2024-07-02T00:23:28.407195Z INFO Daemon Daemon Decode custom data
Jul 2 00:23:28.411993 waagent[1826]: 2024-07-02T00:23:28.411934Z INFO Daemon Daemon Save custom data
Jul 2 00:23:29.585696 waagent[1826]: 2024-07-02T00:23:29.585636Z INFO Daemon Daemon Provisioning complete
Jul 2 00:23:29.603166 waagent[1826]: 2024-07-02T00:23:29.603112Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jul 2 00:23:29.610517 waagent[1826]: 2024-07-02T00:23:29.610296Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jul 2 00:23:29.621645 waagent[1826]: 2024-07-02T00:23:29.621579Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jul 2 00:23:29.758679 waagent[1884]: 2024-07-02T00:23:29.758119Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jul 2 00:23:29.758679 waagent[1884]: 2024-07-02T00:23:29.758276Z INFO ExtHandler ExtHandler OS: flatcar 3975.1.1
Jul 2 00:23:29.758679 waagent[1884]: 2024-07-02T00:23:29.758327Z INFO ExtHandler ExtHandler Python: 3.11.9
Jul 2 00:23:30.373510 waagent[1884]: 2024-07-02T00:23:30.373353Z INFO ExtHandler ExtHandler Distro: flatcar-3975.1.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jul 2 00:23:30.419967 waagent[1884]: 2024-07-02T00:23:30.419880Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 00:23:30.420066 waagent[1884]: 2024-07-02T00:23:30.420032Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 00:23:30.428659 waagent[1884]: 2024-07-02T00:23:30.428578Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jul 2 00:23:30.434411 waagent[1884]: 2024-07-02T00:23:30.434362Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151
Jul 2 00:23:30.434959 waagent[1884]: 2024-07-02T00:23:30.434912Z INFO ExtHandler
Jul 2 00:23:30.435031 waagent[1884]: 2024-07-02T00:23:30.435000Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 73c38202-786f-438e-a758-a7eafcbaac3f eTag: 13712502944570608559 source: Fabric]
Jul 2 00:23:30.435321 waagent[1884]: 2024-07-02T00:23:30.435281Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jul 2 00:23:30.460744 waagent[1884]: 2024-07-02T00:23:30.460626Z INFO ExtHandler
Jul 2 00:23:30.460898 waagent[1884]: 2024-07-02T00:23:30.460841Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jul 2 00:23:30.465414 waagent[1884]: 2024-07-02T00:23:30.465372Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jul 2 00:23:30.760126 waagent[1884]: 2024-07-02T00:23:30.759960Z INFO ExtHandler Downloaded certificate {'thumbprint': '7784C3B174F1228D410C1E83C73869537D169426', 'hasPrivateKey': False}
Jul 2 00:23:30.760595 waagent[1884]: 2024-07-02T00:23:30.760523Z INFO ExtHandler Downloaded certificate {'thumbprint': '74A76E1E8250831D53BB4A51A99FBA2614D3846F', 'hasPrivateKey': True}
Jul 2 00:23:30.761043 waagent[1884]: 2024-07-02T00:23:30.760998Z INFO ExtHandler Fetch goal state completed
Jul 2 00:23:30.781330 waagent[1884]: 2024-07-02T00:23:30.779844Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1884
Jul 2 00:23:30.781330 waagent[1884]: 2024-07-02T00:23:30.780029Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jul 2 00:23:30.781779 waagent[1884]: 2024-07-02T00:23:30.781720Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.1.1', '', 'Flatcar Container Linux by Kinvolk']
Jul 2 00:23:30.782186 waagent[1884]: 2024-07-02T00:23:30.782129Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jul 2 00:23:31.032410 waagent[1884]: 2024-07-02T00:23:31.032308Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jul 2 00:23:31.032552 waagent[1884]: 2024-07-02T00:23:31.032509Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jul 2 00:23:31.039213 waagent[1884]: 2024-07-02T00:23:31.038763Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jul 2 00:23:31.045420 systemd[1]: Reloading requested from client PID 1902 ('systemctl') (unit waagent.service)...
Jul 2 00:23:31.045433 systemd[1]: Reloading...
Jul 2 00:23:31.121588 zram_generator::config[1939]: No configuration found.
Jul 2 00:23:31.220507 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:31.295997 systemd[1]: Reloading finished in 249 ms.
Jul 2 00:23:31.320943 waagent[1884]: 2024-07-02T00:23:31.320206Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jul 2 00:23:31.326799 systemd[1]: Reloading requested from client PID 1987 ('systemctl') (unit waagent.service)...
Jul 2 00:23:31.326813 systemd[1]: Reloading...
Jul 2 00:23:31.409993 zram_generator::config[2018]: No configuration found.
Jul 2 00:23:31.513281 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:23:31.588054 systemd[1]: Reloading finished in 260 ms.
Jul 2 00:23:31.610522 waagent[1884]: 2024-07-02T00:23:31.609758Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jul 2 00:23:31.610522 waagent[1884]: 2024-07-02T00:23:31.609922Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jul 2 00:23:32.144604 waagent[1884]: 2024-07-02T00:23:32.144009Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jul 2 00:23:32.144902 waagent[1884]: 2024-07-02T00:23:32.144657Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jul 2 00:23:32.145577 waagent[1884]: 2024-07-02T00:23:32.145475Z INFO ExtHandler ExtHandler Starting env monitor service.
Jul 2 00:23:32.146059 waagent[1884]: 2024-07-02T00:23:32.145953Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jul 2 00:23:32.146574 waagent[1884]: 2024-07-02T00:23:32.146456Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Jul 2 00:23:32.146808 waagent[1884]: 2024-07-02T00:23:32.146581Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Jul 2 00:23:32.146808 waagent[1884]: 2024-07-02T00:23:32.146715Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 00:23:32.147185 waagent[1884]: 2024-07-02T00:23:32.147084Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Jul 2 00:23:32.147391 waagent[1884]: 2024-07-02T00:23:32.147232Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Jul 2 00:23:32.147510 waagent[1884]: 2024-07-02T00:23:32.147475Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jul 2 00:23:32.147569 waagent[1884]: 2024-07-02T00:23:32.147357Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 00:23:32.147673 waagent[1884]: 2024-07-02T00:23:32.147635Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Jul 2 00:23:32.148250 waagent[1884]: 2024-07-02T00:23:32.148175Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jul 2 00:23:32.148250 waagent[1884]: 2024-07-02T00:23:32.147876Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Jul 2 00:23:32.149389 waagent[1884]: 2024-07-02T00:23:32.149257Z INFO EnvHandler ExtHandler Configure routes
Jul 2 00:23:32.149962 waagent[1884]: 2024-07-02T00:23:32.149836Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Jul 2 00:23:32.149962 waagent[1884]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Jul 2 00:23:32.149962 waagent[1884]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Jul 2 00:23:32.149962 waagent[1884]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Jul 2 00:23:32.149962 waagent[1884]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Jul 2 00:23:32.149962 waagent[1884]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 00:23:32.149962 waagent[1884]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Jul 2 00:23:32.150764 waagent[1884]: 2024-07-02T00:23:32.150690Z INFO EnvHandler ExtHandler Gateway:None
Jul 2 00:23:32.153024 waagent[1884]: 2024-07-02T00:23:32.152661Z INFO EnvHandler ExtHandler Routes:None
Jul 2 00:23:32.154894 waagent[1884]: 2024-07-02T00:23:32.154780Z INFO ExtHandler ExtHandler
Jul 2 00:23:32.155445 waagent[1884]: 2024-07-02T00:23:32.155402Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 115baa78-a557-4db8-a223-5e9d94f497d2 correlation b7a77533-2e59-43ef-9eb9-6cd60c50d3de created: 2024-07-02T00:22:16.722839Z]
Jul 2 00:23:32.156758 waagent[1884]: 2024-07-02T00:23:32.156704Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jul 2 00:23:32.158901 waagent[1884]: 2024-07-02T00:23:32.158857Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms]
Jul 2 00:23:32.197488 waagent[1884]: 2024-07-02T00:23:32.197409Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3CFB964A-14F4-4A5F-9F9A-2F9374ED67A2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Jul 2 00:23:32.202024 waagent[1884]: 2024-07-02T00:23:32.200541Z INFO MonitorHandler ExtHandler Network interfaces:
Jul 2 00:23:32.202024 waagent[1884]: Executing ['ip', '-a', '-o', 'link']:
Jul 2 00:23:32.202024 waagent[1884]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Jul 2 00:23:32.202024 waagent[1884]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:c8:4b brd ff:ff:ff:ff:ff:ff
Jul 2 00:23:32.202024 waagent[1884]: 3: enP23656s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:f9:c8:4b brd ff:ff:ff:ff:ff:ff\ altname enP23656p0s2
Jul 2 00:23:32.202024 waagent[1884]: Executing ['ip', '-4', '-a', '-o', 'address']:
Jul 2 00:23:32.202024 waagent[1884]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Jul 2 00:23:32.202024 waagent[1884]: 2: eth0 inet 10.200.20.12/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Jul 2 00:23:32.202024 waagent[1884]: Executing ['ip', '-6', '-a', '-o', 'address']:
Jul 2 00:23:32.202024 waagent[1884]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Jul 2 00:23:32.202024 waagent[1884]: 2: eth0 inet6 fe80::20d:3aff:fef9:c84b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 2 00:23:32.202024 waagent[1884]: 3: enP23656s1 inet6 fe80::20d:3aff:fef9:c84b/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Jul 2 00:23:32.227102 waagent[1884]: 2024-07-02T00:23:32.227014Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Jul 2 00:23:32.227102 waagent[1884]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:23:32.227102 waagent[1884]: pkts bytes target prot opt in out source destination
Jul 2 00:23:32.227102 waagent[1884]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:23:32.227102 waagent[1884]: pkts bytes target prot opt in out source destination
Jul 2 00:23:32.227102 waagent[1884]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:23:32.227102 waagent[1884]: pkts bytes target prot opt in out source destination
Jul 2 00:23:32.227102 waagent[1884]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 2 00:23:32.227102 waagent[1884]: 10 1102 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 2 00:23:32.227102 waagent[1884]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 2 00:23:32.231128 waagent[1884]: 2024-07-02T00:23:32.230507Z INFO EnvHandler ExtHandler Current Firewall rules:
Jul 2 00:23:32.231128 waagent[1884]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:23:32.231128 waagent[1884]: pkts bytes target prot opt in out source destination
Jul 2 00:23:32.231128 waagent[1884]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:23:32.231128 waagent[1884]: pkts bytes target prot opt in out source destination
Jul 2 00:23:32.231128 waagent[1884]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Jul 2 00:23:32.231128 waagent[1884]: pkts bytes target prot opt in out source destination
Jul 2 00:23:32.231128 waagent[1884]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Jul 2 00:23:32.231128 waagent[1884]: 14 1517 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Jul 2 00:23:32.231128 waagent[1884]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Jul 2 00:23:32.231128 waagent[1884]: 2024-07-02T00:23:32.231009Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Jul 2 00:23:36.119303 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 00:23:36.126805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:36.232239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:36.245845 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:36.794993 kubelet[2111]: E0702 00:23:36.794952 2111 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:36.798664 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:36.798791 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:46.869450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 00:23:46.876805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:46.975296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:46.987022 (kubelet)[2128]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:47.050618 kubelet[2128]: E0702 00:23:47.050502 2128 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:47.053270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:47.053422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:48.017202 chronyd[1661]: Selected source PHC0
Jul 2 00:23:53.526536 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 00:23:53.527659 systemd[1]: Started sshd@0-10.200.20.12:22-10.200.16.10:51928.service - OpenSSH per-connection server daemon (10.200.16.10:51928).
Jul 2 00:23:54.119961 sshd[2137]: Accepted publickey for core from 10.200.16.10 port 51928 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY
Jul 2 00:23:54.121252 sshd[2137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:54.126235 systemd-logind[1680]: New session 3 of user core.
Jul 2 00:23:54.132757 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 00:23:54.560187 systemd[1]: Started sshd@1-10.200.20.12:22-10.200.16.10:51934.service - OpenSSH per-connection server daemon (10.200.16.10:51934).
Jul 2 00:23:55.041656 sshd[2142]: Accepted publickey for core from 10.200.16.10 port 51934 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY
Jul 2 00:23:55.042951 sshd[2142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:55.046897 systemd-logind[1680]: New session 4 of user core.
Jul 2 00:23:55.053764 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 00:23:55.401458 sshd[2142]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:55.405629 systemd[1]: sshd@1-10.200.20.12:22-10.200.16.10:51934.service: Deactivated successfully.
Jul 2 00:23:55.407324 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 00:23:55.408058 systemd-logind[1680]: Session 4 logged out. Waiting for processes to exit.
Jul 2 00:23:55.410171 systemd-logind[1680]: Removed session 4.
Jul 2 00:23:55.486266 systemd[1]: Started sshd@2-10.200.20.12:22-10.200.16.10:51940.service - OpenSSH per-connection server daemon (10.200.16.10:51940).
Jul 2 00:23:55.958389 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 51940 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY
Jul 2 00:23:55.959728 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:55.963523 systemd-logind[1680]: New session 5 of user core.
Jul 2 00:23:55.971784 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 00:23:56.309030 sshd[2149]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:56.313096 systemd[1]: sshd@2-10.200.20.12:22-10.200.16.10:51940.service: Deactivated successfully.
Jul 2 00:23:56.314705 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 00:23:56.315300 systemd-logind[1680]: Session 5 logged out. Waiting for processes to exit.
Jul 2 00:23:56.316291 systemd-logind[1680]: Removed session 5.
Jul 2 00:23:56.389047 systemd[1]: Started sshd@3-10.200.20.12:22-10.200.16.10:51952.service - OpenSSH per-connection server daemon (10.200.16.10:51952).
Jul 2 00:23:56.830712 sshd[2156]: Accepted publickey for core from 10.200.16.10 port 51952 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY
Jul 2 00:23:56.832007 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:56.836752 systemd-logind[1680]: New session 6 of user core.
Jul 2 00:23:56.841775 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 00:23:57.084458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jul 2 00:23:57.090819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:23:57.162485 sshd[2156]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:57.181540 systemd[1]: sshd@3-10.200.20.12:22-10.200.16.10:51952.service: Deactivated successfully.
Jul 2 00:23:57.184234 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 00:23:57.186949 systemd-logind[1680]: Session 6 logged out. Waiting for processes to exit.
Jul 2 00:23:57.189139 systemd-logind[1680]: Removed session 6.
Jul 2 00:23:57.196237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:23:57.202812 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:23:57.247880 systemd[1]: Started sshd@4-10.200.20.12:22-10.200.16.10:51958.service - OpenSSH per-connection server daemon (10.200.16.10:51958).
Jul 2 00:23:57.249100 kubelet[2170]: E0702 00:23:57.248501 2170 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:23:57.252221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:23:57.252490 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:23:57.692451 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 51958 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY
Jul 2 00:23:57.693741 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:57.698008 systemd-logind[1680]: New session 7 of user core.
Jul 2 00:23:57.707710 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 00:23:58.099728 sudo[2182]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 00:23:58.099979 sudo[2182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:58.130261 sudo[2182]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:58.208579 sshd[2178]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:58.212275 systemd[1]: sshd@4-10.200.20.12:22-10.200.16.10:51958.service: Deactivated successfully.
Jul 2 00:23:58.213950 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 00:23:58.216132 systemd-logind[1680]: Session 7 logged out. Waiting for processes to exit.
Jul 2 00:23:58.217430 systemd-logind[1680]: Removed session 7.
Jul 2 00:23:58.288335 systemd[1]: Started sshd@5-10.200.20.12:22-10.200.16.10:51966.service - OpenSSH per-connection server daemon (10.200.16.10:51966).
Jul 2 00:23:58.730399 sshd[2187]: Accepted publickey for core from 10.200.16.10 port 51966 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY
Jul 2 00:23:58.731774 sshd[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:58.735970 systemd-logind[1680]: New session 8 of user core.
Jul 2 00:23:58.744706 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 2 00:23:58.983130 sudo[2191]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 00:23:58.983783 sudo[2191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:58.987195 sudo[2191]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:58.991874 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 00:23:58.992109 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:23:59.012997 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:59.014181 auditctl[2194]: No rules
Jul 2 00:23:59.014503 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 00:23:59.014694 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:59.018072 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 00:23:59.046630 augenrules[2212]: No rules
Jul 2 00:23:59.048168 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 00:23:59.050807 sudo[2190]: pam_unix(sudo:session): session closed for user root
Jul 2 00:23:59.133238 sshd[2187]: pam_unix(sshd:session): session closed for user core
Jul 2 00:23:59.135848 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 00:23:59.136676 systemd[1]: sshd@5-10.200.20.12:22-10.200.16.10:51966.service: Deactivated successfully.
Jul 2 00:23:59.139306 systemd-logind[1680]: Session 8 logged out. Waiting for processes to exit.
Jul 2 00:23:59.140338 systemd-logind[1680]: Removed session 8.
Jul 2 00:23:59.222040 systemd[1]: Started sshd@6-10.200.20.12:22-10.200.16.10:51976.service - OpenSSH per-connection server daemon (10.200.16.10:51976).
Jul 2 00:23:59.696766 sshd[2220]: Accepted publickey for core from 10.200.16.10 port 51976 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY
Jul 2 00:23:59.698019 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:23:59.701654 systemd-logind[1680]: New session 9 of user core.
Jul 2 00:23:59.708723 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 2 00:23:59.966378 sudo[2223]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 00:23:59.966649 sudo[2223]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 00:24:00.361807 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 2 00:24:00.362431 (dockerd)[2232]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 2 00:24:01.089183 dockerd[2232]: time="2024-07-02T00:24:01.089120162Z" level=info msg="Starting up"
Jul 2 00:24:01.228873 dockerd[2232]: time="2024-07-02T00:24:01.228749862Z" level=info msg="Loading containers: start."
Jul 2 00:24:01.475587 kernel: Initializing XFRM netlink socket
Jul 2 00:24:01.608330 systemd-networkd[1552]: docker0: Link UP
Jul 2 00:24:01.632497 dockerd[2232]: time="2024-07-02T00:24:01.631998933Z" level=info msg="Loading containers: done."
Jul 2 00:24:01.971264 dockerd[2232]: time="2024-07-02T00:24:01.971190895Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 00:24:01.971436 dockerd[2232]: time="2024-07-02T00:24:01.971391894Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Jul 2 00:24:01.971535 dockerd[2232]: time="2024-07-02T00:24:01.971508213Z" level=info msg="Daemon has completed initialization"
Jul 2 00:24:02.017235 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 2 00:24:02.017896 dockerd[2232]: time="2024-07-02T00:24:02.017265463Z" level=info msg="API listen on /run/docker.sock"
Jul 2 00:24:03.407365 containerd[1736]: time="2024-07-02T00:24:03.407257241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 00:24:04.485956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108283360.mount: Deactivated successfully.
Jul 2 00:24:06.981606 containerd[1736]: time="2024-07-02T00:24:06.980786990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:06.983343 containerd[1736]: time="2024-07-02T00:24:06.983117263Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940430"
Jul 2 00:24:06.987831 containerd[1736]: time="2024-07-02T00:24:06.987793850Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:06.994788 containerd[1736]: time="2024-07-02T00:24:06.994704790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:06.996653 containerd[1736]: time="2024-07-02T00:24:06.995915826Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 3.588618985s"
Jul 2 00:24:06.996653 containerd[1736]: time="2024-07-02T00:24:06.995959786Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\""
Jul 2 00:24:07.016448 containerd[1736]: time="2024-07-02T00:24:07.016387007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 00:24:07.369212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jul 2 00:24:07.376755 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:24:07.477737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:07.482018 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:24:07.520455 kubelet[2426]: E0702 00:24:07.520398 2426 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:24:07.522483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:24:07.522629 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:24:08.698117 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Jul 2 00:24:09.264749 update_engine[1684]: I0702 00:24:09.264117 1684 update_attempter.cc:509] Updating boot flags...
Jul 2 00:24:09.466593 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2446)
Jul 2 00:24:09.555618 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2447)
Jul 2 00:24:10.077187 containerd[1736]: time="2024-07-02T00:24:10.077128257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:10.080116 containerd[1736]: time="2024-07-02T00:24:10.079879449Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881371"
Jul 2 00:24:10.084590 containerd[1736]: time="2024-07-02T00:24:10.084547035Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:10.090465 containerd[1736]: time="2024-07-02T00:24:10.090386818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:10.093839 containerd[1736]: time="2024-07-02T00:24:10.092035054Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 3.075599047s"
Jul 2 00:24:10.093839 containerd[1736]: time="2024-07-02T00:24:10.092080813Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\""
Jul 2 00:24:10.117028 containerd[1736]: time="2024-07-02T00:24:10.116985222Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 00:24:11.864295 containerd[1736]: time="2024-07-02T00:24:11.864240490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:11.866419 containerd[1736]: time="2024-07-02T00:24:11.866374964Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155688"
Jul 2 00:24:11.869924 containerd[1736]: time="2024-07-02T00:24:11.869857912Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:11.875082 containerd[1736]: time="2024-07-02T00:24:11.875003456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:11.876276 containerd[1736]: time="2024-07-02T00:24:11.876139412Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 1.75910895s"
Jul 2 00:24:11.876276 containerd[1736]: time="2024-07-02T00:24:11.876185452Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\""
Jul 2 00:24:11.898380 containerd[1736]: time="2024-07-02T00:24:11.898306101Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 00:24:13.519360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248380699.mount: Deactivated successfully.
Jul 2 00:24:13.796270 containerd[1736]: time="2024-07-02T00:24:13.796135360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:13.799226 containerd[1736]: time="2024-07-02T00:24:13.798978353Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634092"
Jul 2 00:24:13.805630 containerd[1736]: time="2024-07-02T00:24:13.805588417Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:13.813588 containerd[1736]: time="2024-07-02T00:24:13.812732560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:13.813588 containerd[1736]: time="2024-07-02T00:24:13.813412958Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 1.915059777s"
Jul 2 00:24:13.813588 containerd[1736]: time="2024-07-02T00:24:13.813458478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\""
Jul 2 00:24:13.835132 containerd[1736]: time="2024-07-02T00:24:13.835085705Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 00:24:14.683864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171984243.mount: Deactivated successfully.
Jul 2 00:24:16.305604 containerd[1736]: time="2024-07-02T00:24:16.305242939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.307150 containerd[1736]: time="2024-07-02T00:24:16.307100495Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jul 2 00:24:16.312266 containerd[1736]: time="2024-07-02T00:24:16.310790646Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.319600 containerd[1736]: time="2024-07-02T00:24:16.319541745Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.320774 containerd[1736]: time="2024-07-02T00:24:16.320728942Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.485597757s"
Jul 2 00:24:16.320774 containerd[1736]: time="2024-07-02T00:24:16.320774902Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jul 2 00:24:16.341735 containerd[1736]: time="2024-07-02T00:24:16.341684931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 00:24:16.959481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579624480.mount: Deactivated successfully.
Jul 2 00:24:16.985599 containerd[1736]: time="2024-07-02T00:24:16.985221406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.987288 containerd[1736]: time="2024-07-02T00:24:16.987244081Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Jul 2 00:24:16.991382 containerd[1736]: time="2024-07-02T00:24:16.991335191Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.996324 containerd[1736]: time="2024-07-02T00:24:16.996264899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:16.997137 containerd[1736]: time="2024-07-02T00:24:16.997012297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 655.278486ms"
Jul 2 00:24:16.997137 containerd[1736]: time="2024-07-02T00:24:16.997045697Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jul 2 00:24:17.016757 containerd[1736]: time="2024-07-02T00:24:17.016705289Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 00:24:17.619324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Jul 2 00:24:17.626787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:24:17.725312 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:17.734837 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:24:17.776414 kubelet[2597]: E0702 00:24:17.776311 2597 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:24:17.779459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:24:17.779640 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:24:18.084601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210752565.mount: Deactivated successfully.
Jul 2 00:24:23.989593 containerd[1736]: time="2024-07-02T00:24:23.988969350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:23.991356 containerd[1736]: time="2024-07-02T00:24:23.991316143Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Jul 2 00:24:23.994777 containerd[1736]: time="2024-07-02T00:24:23.994723732Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:24.000719 containerd[1736]: time="2024-07-02T00:24:24.000652714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 00:24:24.002217 containerd[1736]: time="2024-07-02T00:24:24.001777750Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 6.985034661s"
Jul 2 00:24:24.002217 containerd[1736]: time="2024-07-02T00:24:24.001816470Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jul 2 00:24:27.869248 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jul 2 00:24:27.878160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:24:28.118699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:28.124436 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 2 00:24:28.169567 kubelet[2718]: E0702 00:24:28.167736 2718 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 00:24:28.170269 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 00:24:28.170410 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 00:24:28.770850 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:28.776840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:24:28.802165 systemd[1]: Reloading requested from client PID 2732 ('systemctl') (unit session-9.scope)...
Jul 2 00:24:28.802290 systemd[1]: Reloading...
Jul 2 00:24:28.928606 zram_generator::config[2772]: No configuration found.
Jul 2 00:24:29.030872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 00:24:29.108530 systemd[1]: Reloading finished in 305 ms.
Jul 2 00:24:29.516858 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 00:24:29.516943 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 00:24:29.517225 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:29.535741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 00:24:30.587460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 00:24:30.596893 (kubelet)[2835]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 2 00:24:30.638063 kubelet[2835]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:24:30.638063 kubelet[2835]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 00:24:30.638063 kubelet[2835]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 00:24:30.638411 kubelet[2835]: I0702 00:24:30.638144 2835 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 00:24:31.845575 kubelet[2835]: I0702 00:24:31.845456 2835 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jul 2 00:24:31.845575 kubelet[2835]: I0702 00:24:31.845488 2835 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 00:24:31.845956 kubelet[2835]: I0702 00:24:31.845691 2835 server.go:927] "Client rotation is on, will bootstrap in background"
Jul 2 00:24:31.856673 kubelet[2835]: E0702 00:24:31.856610 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.858390 kubelet[2835]: I0702 00:24:31.858265 2835 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 00:24:31.869192 kubelet[2835]: I0702 00:24:31.869158 2835 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 00:24:31.869400 kubelet[2835]: I0702 00:24:31.869368 2835 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 00:24:31.869595 kubelet[2835]: I0702 00:24:31.869397 2835 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-a-3e8d94ffa6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 00:24:31.869692 kubelet[2835]: I0702 00:24:31.869600 2835 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 00:24:31.869692 kubelet[2835]: I0702 00:24:31.869611 2835 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 00:24:31.869759 kubelet[2835]: I0702 00:24:31.869737 2835 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:24:31.870530 kubelet[2835]: I0702 00:24:31.870507 2835 kubelet.go:400] "Attempting to sync node with API server"
Jul 2 00:24:31.870580 kubelet[2835]: I0702 00:24:31.870534 2835 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 00:24:31.870612 kubelet[2835]: I0702 00:24:31.870585 2835 kubelet.go:312] "Adding apiserver pod source"
Jul 2 00:24:31.870612 kubelet[2835]: I0702 00:24:31.870607 2835 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 00:24:31.872644 kubelet[2835]: W0702 00:24:31.871686 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-3e8d94ffa6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.872644 kubelet[2835]: E0702 00:24:31.871751 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-3e8d94ffa6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.872644 kubelet[2835]: W0702 00:24:31.871804 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.872644 kubelet[2835]: E0702 00:24:31.871828 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.873004 kubelet[2835]: I0702 00:24:31.872988 2835 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Jul 2 00:24:31.873796 kubelet[2835]: I0702 00:24:31.873211 2835 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 2 00:24:31.873796 kubelet[2835]: W0702 00:24:31.873257 2835 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 00:24:31.873796 kubelet[2835]: I0702 00:24:31.873789 2835 server.go:1264] "Started kubelet"
Jul 2 00:24:31.877414 kubelet[2835]: E0702 00:24:31.877246 2835 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.12:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.12:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3975.1.1-a-3e8d94ffa6.17de3da193615e5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-3e8d94ffa6,UID:ci-3975.1.1-a-3e8d94ffa6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-3e8d94ffa6,},FirstTimestamp:2024-07-02 00:24:31.873769054 +0000 UTC m=+1.273315227,LastTimestamp:2024-07-02 00:24:31.873769054 +0000 UTC m=+1.273315227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-3e8d94ffa6,}"
Jul 2 00:24:31.878969 kubelet[2835]: I0702 00:24:31.878792 2835 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 00:24:31.882076 kubelet[2835]: I0702 00:24:31.881920 2835 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 00:24:31.883539 kubelet[2835]: I0702 00:24:31.883505 2835 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 00:24:31.884579 kubelet[2835]: I0702 00:24:31.883945 2835 server.go:455] "Adding debug handlers to kubelet server"
Jul 2 00:24:31.884972 kubelet[2835]: I0702 00:24:31.884921 2835 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 2 00:24:31.885238 kubelet[2835]: I0702 00:24:31.885220 2835 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 00:24:31.885612 kubelet[2835]: E0702 00:24:31.885590 2835 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 00:24:31.886451 kubelet[2835]: I0702 00:24:31.886428 2835 factory.go:221] Registration of the systemd container factory successfully
Jul 2 00:24:31.886651 kubelet[2835]: I0702 00:24:31.886630 2835 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 2 00:24:31.887127 kubelet[2835]: I0702 00:24:31.887090 2835 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jul 2 00:24:31.887348 kubelet[2835]: E0702 00:24:31.887322 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-3e8d94ffa6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="200ms"
Jul 2 00:24:31.888169 kubelet[2835]: I0702 00:24:31.888143 2835 reconciler.go:26] "Reconciler: start to sync state"
Jul 2 00:24:31.889315 kubelet[2835]: W0702 00:24:31.889269 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.889430 kubelet[2835]: E0702 00:24:31.889417 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.889682 kubelet[2835]: I0702 00:24:31.889662 2835 factory.go:221] Registration of the containerd container factory successfully
Jul 2 00:24:31.894854 kubelet[2835]: I0702 00:24:31.894802 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 00:24:31.898042 kubelet[2835]: I0702 00:24:31.895820 2835 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 00:24:31.898042 kubelet[2835]: I0702 00:24:31.895860 2835 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 00:24:31.898042 kubelet[2835]: I0702 00:24:31.895880 2835 kubelet.go:2337] "Starting kubelet main sync loop"
Jul 2 00:24:31.898042 kubelet[2835]: E0702 00:24:31.895921 2835 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 00:24:31.902492 kubelet[2835]: W0702 00:24:31.902433 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.902492 kubelet[2835]: E0702 00:24:31.902491 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused
Jul 2 00:24:31.996320 kubelet[2835]: E0702 00:24:31.996256 2835 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 00:24:32.021809 kubelet[2835]: I0702 00:24:32.021336 2835 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-3e8d94ffa6"
Jul 2 00:24:32.021809 kubelet[2835]: E0702 00:24:32.021755 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3975.1.1-a-3e8d94ffa6"
Jul 2 00:24:32.022364 kubelet[2835]: I0702 00:24:32.022336 2835 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 00:24:32.022364 kubelet[2835]: I0702 00:24:32.022351 2835 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 00:24:32.022457 kubelet[2835]: I0702 00:24:32.022373 2835 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 00:24:32.027734 kubelet[2835]: I0702 00:24:32.027705 2835 policy_none.go:49] "None policy: Start"
Jul 2 00:24:32.028399 kubelet[2835]: I0702 00:24:32.028377 2835 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 2 00:24:32.028898 kubelet[2835]: I0702 00:24:32.028589 2835 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 00:24:32.037372 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 2 00:24:32.051977 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 2 00:24:32.055384 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 2 00:24:32.065334 kubelet[2835]: I0702 00:24:32.065301 2835 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:24:32.065544 kubelet[2835]: I0702 00:24:32.065496 2835 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:24:32.065654 kubelet[2835]: I0702 00:24:32.065636 2835 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:24:32.067155 kubelet[2835]: E0702 00:24:32.067131 2835 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.1.1-a-3e8d94ffa6\" not found" Jul 2 00:24:32.087891 kubelet[2835]: E0702 00:24:32.087843 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-3e8d94ffa6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="400ms" Jul 2 00:24:32.196652 kubelet[2835]: I0702 00:24:32.196509 2835 topology_manager.go:215] "Topology Admit Handler" podUID="d5c00fdefb11d5803ececf67858f2343" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.198282 kubelet[2835]: I0702 00:24:32.198198 2835 topology_manager.go:215] "Topology Admit Handler" podUID="3dab7bb20b9b779535ed246568dbcd47" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.200177 kubelet[2835]: I0702 00:24:32.200128 2835 topology_manager.go:215] "Topology Admit Handler" podUID="4902171735557bae60f94eac8a3b7f33" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.207909 systemd[1]: Created slice kubepods-burstable-podd5c00fdefb11d5803ececf67858f2343.slice - libcontainer container kubepods-burstable-podd5c00fdefb11d5803ececf67858f2343.slice. 
Jul 2 00:24:32.225206 kubelet[2835]: I0702 00:24:32.225176 2835 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.225869 kubelet[2835]: E0702 00:24:32.225513 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.229450 systemd[1]: Created slice kubepods-burstable-pod3dab7bb20b9b779535ed246568dbcd47.slice - libcontainer container kubepods-burstable-pod3dab7bb20b9b779535ed246568dbcd47.slice. Jul 2 00:24:32.234300 systemd[1]: Created slice kubepods-burstable-pod4902171735557bae60f94eac8a3b7f33.slice - libcontainer container kubepods-burstable-pod4902171735557bae60f94eac8a3b7f33.slice. Jul 2 00:24:32.290507 kubelet[2835]: I0702 00:24:32.290466 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dab7bb20b9b779535ed246568dbcd47-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"3dab7bb20b9b779535ed246568dbcd47\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290507 kubelet[2835]: I0702 00:24:32.290504 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dab7bb20b9b779535ed246568dbcd47-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"3dab7bb20b9b779535ed246568dbcd47\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290774 kubelet[2835]: I0702 00:24:32.290529 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: 
\"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290774 kubelet[2835]: I0702 00:24:32.290544 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290774 kubelet[2835]: I0702 00:24:32.290580 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5c00fdefb11d5803ececf67858f2343-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"d5c00fdefb11d5803ececf67858f2343\") " pod="kube-system/kube-scheduler-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290774 kubelet[2835]: I0702 00:24:32.290597 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dab7bb20b9b779535ed246568dbcd47-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"3dab7bb20b9b779535ed246568dbcd47\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290774 kubelet[2835]: I0702 00:24:32.290612 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290893 kubelet[2835]: I0702 00:24:32.290629 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.290893 kubelet[2835]: I0702 00:24:32.290653 2835 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.488638 kubelet[2835]: E0702 00:24:32.488523 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-3e8d94ffa6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="800ms" Jul 2 00:24:32.527654 containerd[1736]: time="2024-07-02T00:24:32.527455323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-3e8d94ffa6,Uid:d5c00fdefb11d5803ececf67858f2343,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:32.533269 containerd[1736]: time="2024-07-02T00:24:32.533109308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-3e8d94ffa6,Uid:3dab7bb20b9b779535ed246568dbcd47,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:32.537943 containerd[1736]: time="2024-07-02T00:24:32.537734495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6,Uid:4902171735557bae60f94eac8a3b7f33,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:32.628271 kubelet[2835]: I0702 00:24:32.628155 2835 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.628737 kubelet[2835]: E0702 
00:24:32.628671 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:32.777026 kubelet[2835]: W0702 00:24:32.775928 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-3e8d94ffa6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:32.777026 kubelet[2835]: E0702 00:24:32.775999 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.1.1-a-3e8d94ffa6&limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:32.913173 kubelet[2835]: W0702 00:24:32.913077 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:32.913173 kubelet[2835]: E0702 00:24:32.913149 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.12:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:32.947751 kubelet[2835]: W0702 00:24:32.947689 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:32.947751 kubelet[2835]: E0702 00:24:32.947756 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get "https://10.200.20.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:33.190725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3701893278.mount: Deactivated successfully. Jul 2 00:24:33.215602 containerd[1736]: time="2024-07-02T00:24:33.215212700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:33.225257 containerd[1736]: time="2024-07-02T00:24:33.225216754Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jul 2 00:24:33.229764 containerd[1736]: time="2024-07-02T00:24:33.229722742Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:33.234571 containerd[1736]: time="2024-07-02T00:24:33.233537372Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:33.238033 containerd[1736]: time="2024-07-02T00:24:33.237997480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:33.241248 containerd[1736]: time="2024-07-02T00:24:33.241140832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:24:33.246550 containerd[1736]: time="2024-07-02T00:24:33.246509778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:24:33.249814 containerd[1736]: 
time="2024-07-02T00:24:33.249765169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:24:33.250815 containerd[1736]: time="2024-07-02T00:24:33.250580327Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 723.002205ms" Jul 2 00:24:33.252873 containerd[1736]: time="2024-07-02T00:24:33.252829681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 719.621214ms" Jul 2 00:24:33.271096 containerd[1736]: time="2024-07-02T00:24:33.271040793Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 733.211098ms" Jul 2 00:24:33.289308 kubelet[2835]: E0702 00:24:33.289255 2835 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-3e8d94ffa6?timeout=10s\": dial tcp 10.200.20.12:6443: connect: connection refused" interval="1.6s" Jul 2 00:24:33.404644 kubelet[2835]: W0702 00:24:33.404580 2835 reflector.go:547] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:33.404644 kubelet[2835]: E0702 00:24:33.404646 2835 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:33.431157 kubelet[2835]: I0702 00:24:33.431134 2835 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:33.431638 kubelet[2835]: E0702 00:24:33.431606 2835 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.12:6443/api/v1/nodes\": dial tcp 10.200.20.12:6443: connect: connection refused" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:33.911846 kubelet[2835]: E0702 00:24:33.911808 2835 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.12:6443: connect: connection refused Jul 2 00:24:34.066909 containerd[1736]: time="2024-07-02T00:24:34.066779764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:34.066909 containerd[1736]: time="2024-07-02T00:24:34.066845524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:34.066909 containerd[1736]: time="2024-07-02T00:24:34.066860004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:34.067410 containerd[1736]: time="2024-07-02T00:24:34.066869324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:34.068270 containerd[1736]: time="2024-07-02T00:24:34.067745802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:34.068685 containerd[1736]: time="2024-07-02T00:24:34.068484240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:34.068685 containerd[1736]: time="2024-07-02T00:24:34.068550840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:34.068685 containerd[1736]: time="2024-07-02T00:24:34.068590920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:34.068685 containerd[1736]: time="2024-07-02T00:24:34.068610559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:34.068874 containerd[1736]: time="2024-07-02T00:24:34.068365320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:34.068874 containerd[1736]: time="2024-07-02T00:24:34.068390080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:34.068874 containerd[1736]: time="2024-07-02T00:24:34.068400880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:34.088800 systemd[1]: Started cri-containerd-047d87f75bba692bbf26f38a185afe11d3d890019efa9e25855ed4b5ab7b053d.scope - libcontainer container 047d87f75bba692bbf26f38a185afe11d3d890019efa9e25855ed4b5ab7b053d. Jul 2 00:24:34.103747 systemd[1]: Started cri-containerd-10bc95d076f644e97ad3bdfc31d90d8f80a71489b9ba69be81d7acef2bced655.scope - libcontainer container 10bc95d076f644e97ad3bdfc31d90d8f80a71489b9ba69be81d7acef2bced655. Jul 2 00:24:34.105176 systemd[1]: Started cri-containerd-8a1a576fa8ccaffeae75290a8dd8d21cf79b0964c51cbce800d1f6175ed8a677.scope - libcontainer container 8a1a576fa8ccaffeae75290a8dd8d21cf79b0964c51cbce800d1f6175ed8a677. Jul 2 00:24:34.154959 containerd[1736]: time="2024-07-02T00:24:34.154814131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.1.1-a-3e8d94ffa6,Uid:3dab7bb20b9b779535ed246568dbcd47,Namespace:kube-system,Attempt:0,} returns sandbox id \"047d87f75bba692bbf26f38a185afe11d3d890019efa9e25855ed4b5ab7b053d\"" Jul 2 00:24:34.159822 containerd[1736]: time="2024-07-02T00:24:34.159174920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6,Uid:4902171735557bae60f94eac8a3b7f33,Namespace:kube-system,Attempt:0,} returns sandbox id \"10bc95d076f644e97ad3bdfc31d90d8f80a71489b9ba69be81d7acef2bced655\"" Jul 2 00:24:34.160946 containerd[1736]: time="2024-07-02T00:24:34.160474676Z" level=info msg="CreateContainer within sandbox \"047d87f75bba692bbf26f38a185afe11d3d890019efa9e25855ed4b5ab7b053d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:24:34.162713 containerd[1736]: time="2024-07-02T00:24:34.162617510Z" level=info msg="CreateContainer within sandbox \"10bc95d076f644e97ad3bdfc31d90d8f80a71489b9ba69be81d7acef2bced655\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:24:34.166040 containerd[1736]: 
time="2024-07-02T00:24:34.166009101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.1.1-a-3e8d94ffa6,Uid:d5c00fdefb11d5803ececf67858f2343,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a1a576fa8ccaffeae75290a8dd8d21cf79b0964c51cbce800d1f6175ed8a677\"" Jul 2 00:24:34.169310 containerd[1736]: time="2024-07-02T00:24:34.169194333Z" level=info msg="CreateContainer within sandbox \"8a1a576fa8ccaffeae75290a8dd8d21cf79b0964c51cbce800d1f6175ed8a677\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:24:34.205299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2683194993.mount: Deactivated successfully. Jul 2 00:24:34.231194 containerd[1736]: time="2024-07-02T00:24:34.231143329Z" level=info msg="CreateContainer within sandbox \"047d87f75bba692bbf26f38a185afe11d3d890019efa9e25855ed4b5ab7b053d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d7d7b9f711b4cd7c9a60cdee0e0fd9128d55707eee4a820ef3b8735e953ffbb\"" Jul 2 00:24:34.233266 containerd[1736]: time="2024-07-02T00:24:34.231950927Z" level=info msg="StartContainer for \"6d7d7b9f711b4cd7c9a60cdee0e0fd9128d55707eee4a820ef3b8735e953ffbb\"" Jul 2 00:24:34.235699 containerd[1736]: time="2024-07-02T00:24:34.235648597Z" level=info msg="CreateContainer within sandbox \"8a1a576fa8ccaffeae75290a8dd8d21cf79b0964c51cbce800d1f6175ed8a677\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd\"" Jul 2 00:24:34.236272 containerd[1736]: time="2024-07-02T00:24:34.236224795Z" level=info msg="StartContainer for \"3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd\"" Jul 2 00:24:34.239800 containerd[1736]: time="2024-07-02T00:24:34.239760866Z" level=info msg="CreateContainer within sandbox \"10bc95d076f644e97ad3bdfc31d90d8f80a71489b9ba69be81d7acef2bced655\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns 
container id \"fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38\"" Jul 2 00:24:34.240755 containerd[1736]: time="2024-07-02T00:24:34.240717344Z" level=info msg="StartContainer for \"fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38\"" Jul 2 00:24:34.270919 systemd[1]: Started cri-containerd-3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd.scope - libcontainer container 3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd. Jul 2 00:24:34.272489 systemd[1]: Started cri-containerd-6d7d7b9f711b4cd7c9a60cdee0e0fd9128d55707eee4a820ef3b8735e953ffbb.scope - libcontainer container 6d7d7b9f711b4cd7c9a60cdee0e0fd9128d55707eee4a820ef3b8735e953ffbb. Jul 2 00:24:34.288732 systemd[1]: Started cri-containerd-fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38.scope - libcontainer container fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38. Jul 2 00:24:34.342593 containerd[1736]: time="2024-07-02T00:24:34.342363394Z" level=info msg="StartContainer for \"6d7d7b9f711b4cd7c9a60cdee0e0fd9128d55707eee4a820ef3b8735e953ffbb\" returns successfully" Jul 2 00:24:34.355128 containerd[1736]: time="2024-07-02T00:24:34.355072641Z" level=info msg="StartContainer for \"fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38\" returns successfully" Jul 2 00:24:34.355350 containerd[1736]: time="2024-07-02T00:24:34.355326240Z" level=info msg="StartContainer for \"3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd\" returns successfully" Jul 2 00:24:35.033741 kubelet[2835]: I0702 00:24:35.033709 2835 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:35.994615 kubelet[2835]: E0702 00:24:35.994567 2835 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.1.1-a-3e8d94ffa6\" not found" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:36.031693 kubelet[2835]: E0702 00:24:36.031410 2835 event.go:359] 
"Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975.1.1-a-3e8d94ffa6.17de3da193615e5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-3e8d94ffa6,UID:ci-3975.1.1-a-3e8d94ffa6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-3e8d94ffa6,},FirstTimestamp:2024-07-02 00:24:31.873769054 +0000 UTC m=+1.273315227,LastTimestamp:2024-07-02 00:24:31.873769054 +0000 UTC m=+1.273315227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-3e8d94ffa6,}" Jul 2 00:24:36.064260 kubelet[2835]: I0702 00:24:36.064065 2835 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:36.142939 kubelet[2835]: E0702 00:24:36.142833 2835 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975.1.1-a-3e8d94ffa6.17de3da194158c2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-3e8d94ffa6,UID:ci-3975.1.1-a-3e8d94ffa6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-3e8d94ffa6,},FirstTimestamp:2024-07-02 00:24:31.885577263 +0000 UTC m=+1.285123436,LastTimestamp:2024-07-02 00:24:31.885577263 +0000 UTC m=+1.285123436,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-3e8d94ffa6,}" Jul 2 00:24:36.223262 kubelet[2835]: E0702 00:24:36.222989 2835 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ci-3975.1.1-a-3e8d94ffa6.17de3da19c2c6ba0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-3e8d94ffa6,UID:ci-3975.1.1-a-3e8d94ffa6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-3975.1.1-a-3e8d94ffa6 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-3e8d94ffa6,},FirstTimestamp:2024-07-02 00:24:32.021293984 +0000 UTC m=+1.420840157,LastTimestamp:2024-07-02 00:24:32.021293984 +0000 UTC m=+1.420840157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-3e8d94ffa6,}" Jul 2 00:24:36.315523 kubelet[2835]: E0702 00:24:36.315294 2835 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3975.1.1-a-3e8d94ffa6.17de3da19c2c7d98 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3975.1.1-a-3e8d94ffa6,UID:ci-3975.1.1-a-3e8d94ffa6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-3975.1.1-a-3e8d94ffa6 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-3e8d94ffa6,},FirstTimestamp:2024-07-02 00:24:32.021298584 +0000 UTC m=+1.420844717,LastTimestamp:2024-07-02 00:24:32.021298584 +0000 UTC m=+1.420844717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-3e8d94ffa6,}" Jul 2 00:24:36.875443 kubelet[2835]: I0702 00:24:36.875396 2835 apiserver.go:52] "Watching apiserver" Jul 2 00:24:36.888061 kubelet[2835]: I0702 00:24:36.888024 2835 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:24:38.027498 systemd[1]: Reloading requested 
from client PID 3110 ('systemctl') (unit session-9.scope)... Jul 2 00:24:38.027519 systemd[1]: Reloading... Jul 2 00:24:38.125871 zram_generator::config[3150]: No configuration found. Jul 2 00:24:38.223058 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:24:38.313016 systemd[1]: Reloading finished in 285 ms. Jul 2 00:24:38.354452 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:38.372756 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:24:38.373003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:38.373061 systemd[1]: kubelet.service: Consumed 1.629s CPU time, 113.7M memory peak, 0B memory swap peak. Jul 2 00:24:38.380897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:24:38.566645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:24:38.577946 (kubelet)[3211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:24:38.639130 kubelet[3211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:24:38.640006 kubelet[3211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:24:38.640006 kubelet[3211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:24:38.640006 kubelet[3211]: I0702 00:24:38.639542 3211 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:24:38.644070 kubelet[3211]: I0702 00:24:38.644033 3211 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:24:38.644070 kubelet[3211]: I0702 00:24:38.644063 3211 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:24:38.644275 kubelet[3211]: I0702 00:24:38.644255 3211 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:24:38.645701 kubelet[3211]: I0702 00:24:38.645665 3211 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:24:38.647075 kubelet[3211]: I0702 00:24:38.647047 3211 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:24:38.658007 kubelet[3211]: I0702 00:24:38.657969 3211 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:24:38.658219 kubelet[3211]: I0702 00:24:38.658183 3211 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:24:38.658392 kubelet[3211]: I0702 00:24:38.658219 3211 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3975.1.1-a-3e8d94ffa6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:24:38.658474 kubelet[3211]: I0702 00:24:38.658396 3211 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
00:24:38.658474 kubelet[3211]: I0702 00:24:38.658405 3211 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:24:38.658474 kubelet[3211]: I0702 00:24:38.658436 3211 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:38.658573 kubelet[3211]: I0702 00:24:38.658543 3211 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:24:38.659015 kubelet[3211]: I0702 00:24:38.658995 3211 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:24:38.659047 kubelet[3211]: I0702 00:24:38.659040 3211 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:24:38.659179 kubelet[3211]: I0702 00:24:38.659054 3211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:24:38.664767 kubelet[3211]: I0702 00:24:38.664620 3211 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:24:38.664876 kubelet[3211]: I0702 00:24:38.664790 3211 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:24:38.665194 kubelet[3211]: I0702 00:24:38.665170 3211 server.go:1264] "Started kubelet" Jul 2 00:24:38.667034 kubelet[3211]: I0702 00:24:38.667001 3211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:24:38.677658 kubelet[3211]: I0702 00:24:38.676602 3211 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:24:38.680051 kubelet[3211]: I0702 00:24:38.680014 3211 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:24:38.681563 kubelet[3211]: I0702 00:24:38.680900 3211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:24:38.681563 kubelet[3211]: I0702 00:24:38.681117 3211 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:24:38.684670 kubelet[3211]: I0702 00:24:38.684632 3211 
volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:24:38.688495 kubelet[3211]: I0702 00:24:38.688455 3211 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:24:38.688669 kubelet[3211]: I0702 00:24:38.688651 3211 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:24:38.694263 kubelet[3211]: I0702 00:24:38.691130 3211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:24:38.694263 kubelet[3211]: I0702 00:24:38.692156 3211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:24:38.694263 kubelet[3211]: I0702 00:24:38.692189 3211 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:24:38.694263 kubelet[3211]: I0702 00:24:38.692205 3211 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:24:38.694263 kubelet[3211]: E0702 00:24:38.692254 3211 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:24:38.702474 kubelet[3211]: I0702 00:24:38.701286 3211 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:24:38.702770 kubelet[3211]: I0702 00:24:38.702741 3211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:24:38.706071 kubelet[3211]: E0702 00:24:38.705321 3211 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:24:38.706898 kubelet[3211]: I0702 00:24:38.706864 3211 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:24:38.750316 kubelet[3211]: I0702 00:24:38.750049 3211 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:24:38.750316 kubelet[3211]: I0702 00:24:38.750068 3211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:24:38.750316 kubelet[3211]: I0702 00:24:38.750089 3211 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:24:38.750316 kubelet[3211]: I0702 00:24:38.750248 3211 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:24:38.750316 kubelet[3211]: I0702 00:24:38.750258 3211 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:24:38.750316 kubelet[3211]: I0702 00:24:38.750285 3211 policy_none.go:49] "None policy: Start" Jul 2 00:24:38.751001 kubelet[3211]: I0702 00:24:38.750971 3211 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:24:38.751001 kubelet[3211]: I0702 00:24:38.750999 3211 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:24:38.751165 kubelet[3211]: I0702 00:24:38.751144 3211 state_mem.go:75] "Updated machine memory state" Jul 2 00:24:38.755451 kubelet[3211]: I0702 00:24:38.755411 3211 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:24:38.756016 kubelet[3211]: I0702 00:24:38.755622 3211 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:24:38.756642 kubelet[3211]: I0702 00:24:38.756398 3211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:24:38.787853 kubelet[3211]: I0702 00:24:38.787664 3211 kubelet_node_status.go:73] "Attempting to register node" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.793085 kubelet[3211]: I0702 00:24:38.792693 3211 
topology_manager.go:215] "Topology Admit Handler" podUID="3dab7bb20b9b779535ed246568dbcd47" podNamespace="kube-system" podName="kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.793085 kubelet[3211]: I0702 00:24:38.792817 3211 topology_manager.go:215] "Topology Admit Handler" podUID="4902171735557bae60f94eac8a3b7f33" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.793085 kubelet[3211]: I0702 00:24:38.792854 3211 topology_manager.go:215] "Topology Admit Handler" podUID="d5c00fdefb11d5803ececf67858f2343" podNamespace="kube-system" podName="kube-scheduler-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.800947 kubelet[3211]: I0702 00:24:38.800898 3211 kubelet_node_status.go:112] "Node was previously registered" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.801084 kubelet[3211]: I0702 00:24:38.800998 3211 kubelet_node_status.go:76] "Successfully registered node" node="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.811028 kubelet[3211]: W0702 00:24:38.810992 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:38.814580 kubelet[3211]: W0702 00:24:38.813032 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:38.814580 kubelet[3211]: W0702 00:24:38.813491 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:38.889642 kubelet[3211]: I0702 00:24:38.889592 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5c00fdefb11d5803ececf67858f2343-kubeconfig\") pod \"kube-scheduler-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"d5c00fdefb11d5803ececf67858f2343\") " 
pod="kube-system/kube-scheduler-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.889938 kubelet[3211]: I0702 00:24:38.889666 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3dab7bb20b9b779535ed246568dbcd47-ca-certs\") pod \"kube-apiserver-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"3dab7bb20b9b779535ed246568dbcd47\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.889938 kubelet[3211]: I0702 00:24:38.889690 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-ca-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.889938 kubelet[3211]: I0702 00:24:38.889708 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-k8s-certs\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.889938 kubelet[3211]: I0702 00:24:38.889733 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-kubeconfig\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.889938 kubelet[3211]: I0702 00:24:38.889750 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.890066 kubelet[3211]: I0702 00:24:38.889768 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3dab7bb20b9b779535ed246568dbcd47-k8s-certs\") pod \"kube-apiserver-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"3dab7bb20b9b779535ed246568dbcd47\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.890066 kubelet[3211]: I0702 00:24:38.889789 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3dab7bb20b9b779535ed246568dbcd47-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"3dab7bb20b9b779535ed246568dbcd47\") " pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:38.890066 kubelet[3211]: I0702 00:24:38.889818 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4902171735557bae60f94eac8a3b7f33-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" (UID: \"4902171735557bae60f94eac8a3b7f33\") " pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:39.662229 kubelet[3211]: I0702 00:24:39.661871 3211 apiserver.go:52] "Watching apiserver" Jul 2 00:24:39.689121 kubelet[3211]: I0702 00:24:39.689046 3211 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:24:39.751597 kubelet[3211]: W0702 00:24:39.751402 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must not contain dots] Jul 2 00:24:39.752577 kubelet[3211]: E0702 00:24:39.752253 3211 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3975.1.1-a-3e8d94ffa6\" already exists" pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:39.760534 kubelet[3211]: W0702 00:24:39.760372 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 00:24:39.760534 kubelet[3211]: E0702 00:24:39.760448 3211 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6\" already exists" pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:24:39.777220 kubelet[3211]: I0702 00:24:39.776352 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.1.1-a-3e8d94ffa6" podStartSLOduration=1.776320053 podStartE2EDuration="1.776320053s" podCreationTimestamp="2024-07-02 00:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:39.775808454 +0000 UTC m=+1.193820904" watchObservedRunningTime="2024-07-02 00:24:39.776320053 +0000 UTC m=+1.194332503" Jul 2 00:24:39.814034 kubelet[3211]: I0702 00:24:39.813066 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.1.1-a-3e8d94ffa6" podStartSLOduration=1.813047326 podStartE2EDuration="1.813047326s" podCreationTimestamp="2024-07-02 00:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:39.788734063 +0000 UTC m=+1.206746513" watchObservedRunningTime="2024-07-02 00:24:39.813047326 +0000 UTC m=+1.231059776" Jul 2 00:24:43.779153 sudo[2223]: pam_unix(sudo:session): session closed for 
user root Jul 2 00:24:43.867892 sshd[2220]: pam_unix(sshd:session): session closed for user core Jul 2 00:24:43.870660 systemd[1]: sshd@6-10.200.20.12:22-10.200.16.10:51976.service: Deactivated successfully. Jul 2 00:24:43.873155 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:24:43.873465 systemd[1]: session-9.scope: Consumed 6.305s CPU time, 131.6M memory peak, 0B memory swap peak. Jul 2 00:24:43.874990 systemd-logind[1680]: Session 9 logged out. Waiting for processes to exit. Jul 2 00:24:43.876446 systemd-logind[1680]: Removed session 9. Jul 2 00:24:44.476077 kubelet[3211]: I0702 00:24:44.476007 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3975.1.1-a-3e8d94ffa6" podStartSLOduration=6.475989081 podStartE2EDuration="6.475989081s" podCreationTimestamp="2024-07-02 00:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:39.816527478 +0000 UTC m=+1.234539928" watchObservedRunningTime="2024-07-02 00:24:44.475989081 +0000 UTC m=+5.894001531" Jul 2 00:24:53.146438 kubelet[3211]: I0702 00:24:53.146348 3211 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:24:53.147126 containerd[1736]: time="2024-07-02T00:24:53.147005844Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:24:53.147966 kubelet[3211]: I0702 00:24:53.147239 3211 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:24:54.129037 kubelet[3211]: I0702 00:24:54.128922 3211 topology_manager.go:215] "Topology Admit Handler" podUID="57ff91cc-205d-46c8-8fd5-1327ed93b836" podNamespace="kube-system" podName="kube-proxy-pbvgg" Jul 2 00:24:54.138107 systemd[1]: Created slice kubepods-besteffort-pod57ff91cc_205d_46c8_8fd5_1327ed93b836.slice - libcontainer container kubepods-besteffort-pod57ff91cc_205d_46c8_8fd5_1327ed93b836.slice. Jul 2 00:24:54.189364 kubelet[3211]: I0702 00:24:54.189129 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57ff91cc-205d-46c8-8fd5-1327ed93b836-kube-proxy\") pod \"kube-proxy-pbvgg\" (UID: \"57ff91cc-205d-46c8-8fd5-1327ed93b836\") " pod="kube-system/kube-proxy-pbvgg" Jul 2 00:24:54.189364 kubelet[3211]: I0702 00:24:54.189169 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57ff91cc-205d-46c8-8fd5-1327ed93b836-xtables-lock\") pod \"kube-proxy-pbvgg\" (UID: \"57ff91cc-205d-46c8-8fd5-1327ed93b836\") " pod="kube-system/kube-proxy-pbvgg" Jul 2 00:24:54.189364 kubelet[3211]: I0702 00:24:54.189189 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57ff91cc-205d-46c8-8fd5-1327ed93b836-lib-modules\") pod \"kube-proxy-pbvgg\" (UID: \"57ff91cc-205d-46c8-8fd5-1327ed93b836\") " pod="kube-system/kube-proxy-pbvgg" Jul 2 00:24:54.189364 kubelet[3211]: I0702 00:24:54.189230 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hznfc\" (UniqueName: \"kubernetes.io/projected/57ff91cc-205d-46c8-8fd5-1327ed93b836-kube-api-access-hznfc\") pod 
\"kube-proxy-pbvgg\" (UID: \"57ff91cc-205d-46c8-8fd5-1327ed93b836\") " pod="kube-system/kube-proxy-pbvgg" Jul 2 00:24:54.251928 kubelet[3211]: I0702 00:24:54.251784 3211 topology_manager.go:215] "Topology Admit Handler" podUID="79ebbea4-fb32-4128-a399-17a785b530cf" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-mx6lr" Jul 2 00:24:54.261451 systemd[1]: Created slice kubepods-besteffort-pod79ebbea4_fb32_4128_a399_17a785b530cf.slice - libcontainer container kubepods-besteffort-pod79ebbea4_fb32_4128_a399_17a785b530cf.slice. Jul 2 00:24:54.290066 kubelet[3211]: I0702 00:24:54.289960 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/79ebbea4-fb32-4128-a399-17a785b530cf-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-mx6lr\" (UID: \"79ebbea4-fb32-4128-a399-17a785b530cf\") " pod="tigera-operator/tigera-operator-76ff79f7fd-mx6lr" Jul 2 00:24:54.290066 kubelet[3211]: I0702 00:24:54.289997 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2ctb\" (UniqueName: \"kubernetes.io/projected/79ebbea4-fb32-4128-a399-17a785b530cf-kube-api-access-l2ctb\") pod \"tigera-operator-76ff79f7fd-mx6lr\" (UID: \"79ebbea4-fb32-4128-a399-17a785b530cf\") " pod="tigera-operator/tigera-operator-76ff79f7fd-mx6lr" Jul 2 00:24:54.447589 containerd[1736]: time="2024-07-02T00:24:54.447459884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbvgg,Uid:57ff91cc-205d-46c8-8fd5-1327ed93b836,Namespace:kube-system,Attempt:0,}" Jul 2 00:24:54.494169 containerd[1736]: time="2024-07-02T00:24:54.493325577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:54.494169 containerd[1736]: time="2024-07-02T00:24:54.493812616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:54.494169 containerd[1736]: time="2024-07-02T00:24:54.493850696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:54.494169 containerd[1736]: time="2024-07-02T00:24:54.493863695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:54.516819 systemd[1]: Started cri-containerd-d69322874243a2f4d4702ede4834b6f448a3cad972ce5914791f865a767d36de.scope - libcontainer container d69322874243a2f4d4702ede4834b6f448a3cad972ce5914791f865a767d36de. Jul 2 00:24:54.538105 containerd[1736]: time="2024-07-02T00:24:54.537849595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbvgg,Uid:57ff91cc-205d-46c8-8fd5-1327ed93b836,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69322874243a2f4d4702ede4834b6f448a3cad972ce5914791f865a767d36de\"" Jul 2 00:24:54.543079 containerd[1736]: time="2024-07-02T00:24:54.542627620Z" level=info msg="CreateContainer within sandbox \"d69322874243a2f4d4702ede4834b6f448a3cad972ce5914791f865a767d36de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:24:54.565869 containerd[1736]: time="2024-07-02T00:24:54.565590146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-mx6lr,Uid:79ebbea4-fb32-4128-a399-17a785b530cf,Namespace:tigera-operator,Attempt:0,}" Jul 2 00:24:54.580684 containerd[1736]: time="2024-07-02T00:24:54.580529178Z" level=info msg="CreateContainer within sandbox \"d69322874243a2f4d4702ede4834b6f448a3cad972ce5914791f865a767d36de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"31db43ebdcb56f2ba8b47e8d42d88b08a5bff61b68c3dd18d5ed1a85816dd83e\"" Jul 2 00:24:54.581460 containerd[1736]: time="2024-07-02T00:24:54.581329696Z" level=info msg="StartContainer for 
\"31db43ebdcb56f2ba8b47e8d42d88b08a5bff61b68c3dd18d5ed1a85816dd83e\"" Jul 2 00:24:54.609762 systemd[1]: Started cri-containerd-31db43ebdcb56f2ba8b47e8d42d88b08a5bff61b68c3dd18d5ed1a85816dd83e.scope - libcontainer container 31db43ebdcb56f2ba8b47e8d42d88b08a5bff61b68c3dd18d5ed1a85816dd83e. Jul 2 00:24:54.624042 containerd[1736]: time="2024-07-02T00:24:54.623749720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:54.624042 containerd[1736]: time="2024-07-02T00:24:54.623816320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:54.624042 containerd[1736]: time="2024-07-02T00:24:54.623835160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:54.624042 containerd[1736]: time="2024-07-02T00:24:54.623848520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:54.647781 systemd[1]: Started cri-containerd-b6cd7a7f5bf4031dd6f3ad0dfbe41bc198f3e5edcd6ca76d0fb86cff188fdced.scope - libcontainer container b6cd7a7f5bf4031dd6f3ad0dfbe41bc198f3e5edcd6ca76d0fb86cff188fdced. 
Jul 2 00:24:54.650786 containerd[1736]: time="2024-07-02T00:24:54.650729394Z" level=info msg="StartContainer for \"31db43ebdcb56f2ba8b47e8d42d88b08a5bff61b68c3dd18d5ed1a85816dd83e\" returns successfully" Jul 2 00:24:54.689818 containerd[1736]: time="2024-07-02T00:24:54.689590029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-mx6lr,Uid:79ebbea4-fb32-4128-a399-17a785b530cf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b6cd7a7f5bf4031dd6f3ad0dfbe41bc198f3e5edcd6ca76d0fb86cff188fdced\"" Jul 2 00:24:54.693192 containerd[1736]: time="2024-07-02T00:24:54.692374580Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 00:24:54.769713 kubelet[3211]: I0702 00:24:54.768998 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbvgg" podStartSLOduration=0.768980415 podStartE2EDuration="768.980415ms" podCreationTimestamp="2024-07-02 00:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:24:54.768800256 +0000 UTC m=+16.186812706" watchObservedRunningTime="2024-07-02 00:24:54.768980415 +0000 UTC m=+16.186992865" Jul 2 00:24:56.140742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2380896843.mount: Deactivated successfully. 
Jul 2 00:24:56.461512 containerd[1736]: time="2024-07-02T00:24:56.461381401Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:56.464030 containerd[1736]: time="2024-07-02T00:24:56.463980313Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473662" Jul 2 00:24:56.468793 containerd[1736]: time="2024-07-02T00:24:56.468733898Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:56.475252 containerd[1736]: time="2024-07-02T00:24:56.474994238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:56.476645 containerd[1736]: time="2024-07-02T00:24:56.476611273Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 1.784195093s" Jul 2 00:24:56.476847 containerd[1736]: time="2024-07-02T00:24:56.476748872Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jul 2 00:24:56.479149 containerd[1736]: time="2024-07-02T00:24:56.478951785Z" level=info msg="CreateContainer within sandbox \"b6cd7a7f5bf4031dd6f3ad0dfbe41bc198f3e5edcd6ca76d0fb86cff188fdced\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 00:24:56.512090 containerd[1736]: time="2024-07-02T00:24:56.512033759Z" level=info msg="CreateContainer within sandbox 
\"b6cd7a7f5bf4031dd6f3ad0dfbe41bc198f3e5edcd6ca76d0fb86cff188fdced\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d\"" Jul 2 00:24:56.512721 containerd[1736]: time="2024-07-02T00:24:56.512690317Z" level=info msg="StartContainer for \"d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d\"" Jul 2 00:24:56.540743 systemd[1]: Started cri-containerd-d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d.scope - libcontainer container d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d. Jul 2 00:24:56.570492 containerd[1736]: time="2024-07-02T00:24:56.570428813Z" level=info msg="StartContainer for \"d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d\" returns successfully" Jul 2 00:24:58.702270 kubelet[3211]: I0702 00:24:58.701815 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-mx6lr" podStartSLOduration=2.916218906 podStartE2EDuration="4.701795394s" podCreationTimestamp="2024-07-02 00:24:54 +0000 UTC" firstStartedPulling="2024-07-02 00:24:54.691794102 +0000 UTC m=+16.109806512" lastFinishedPulling="2024-07-02 00:24:56.47737055 +0000 UTC m=+17.895383000" observedRunningTime="2024-07-02 00:24:56.77433136 +0000 UTC m=+18.192343810" watchObservedRunningTime="2024-07-02 00:24:58.701795394 +0000 UTC m=+20.119807844" Jul 2 00:25:01.167841 kubelet[3211]: I0702 00:25:01.167744 3211 topology_manager.go:215] "Topology Admit Handler" podUID="a6a69536-6190-4d9b-aadf-b689bb45fe49" podNamespace="calico-system" podName="calico-typha-5c478f5d7c-97d8z" Jul 2 00:25:01.180716 systemd[1]: Created slice kubepods-besteffort-poda6a69536_6190_4d9b_aadf_b689bb45fe49.slice - libcontainer container kubepods-besteffort-poda6a69536_6190_4d9b_aadf_b689bb45fe49.slice. 
Jul 2 00:25:01.229170 kubelet[3211]: I0702 00:25:01.228294 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a6a69536-6190-4d9b-aadf-b689bb45fe49-typha-certs\") pod \"calico-typha-5c478f5d7c-97d8z\" (UID: \"a6a69536-6190-4d9b-aadf-b689bb45fe49\") " pod="calico-system/calico-typha-5c478f5d7c-97d8z" Jul 2 00:25:01.229519 kubelet[3211]: I0702 00:25:01.229423 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6a69536-6190-4d9b-aadf-b689bb45fe49-tigera-ca-bundle\") pod \"calico-typha-5c478f5d7c-97d8z\" (UID: \"a6a69536-6190-4d9b-aadf-b689bb45fe49\") " pod="calico-system/calico-typha-5c478f5d7c-97d8z" Jul 2 00:25:01.229519 kubelet[3211]: I0702 00:25:01.229460 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkshz\" (UniqueName: \"kubernetes.io/projected/a6a69536-6190-4d9b-aadf-b689bb45fe49-kube-api-access-zkshz\") pod \"calico-typha-5c478f5d7c-97d8z\" (UID: \"a6a69536-6190-4d9b-aadf-b689bb45fe49\") " pod="calico-system/calico-typha-5c478f5d7c-97d8z" Jul 2 00:25:01.275030 kubelet[3211]: I0702 00:25:01.274986 3211 topology_manager.go:215] "Topology Admit Handler" podUID="3cfdc751-f451-42da-b40c-59f3dc4e4385" podNamespace="calico-system" podName="calico-node-fqj4x" Jul 2 00:25:01.282331 systemd[1]: Created slice kubepods-besteffort-pod3cfdc751_f451_42da_b40c_59f3dc4e4385.slice - libcontainer container kubepods-besteffort-pod3cfdc751_f451_42da_b40c_59f3dc4e4385.slice. 
Jul 2 00:25:01.329880 kubelet[3211]: I0702 00:25:01.329836 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-flexvol-driver-host\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330208 kubelet[3211]: I0702 00:25:01.330060 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-var-lib-calico\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330208 kubelet[3211]: I0702 00:25:01.330089 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-cni-net-dir\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330208 kubelet[3211]: I0702 00:25:01.330107 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cfdc751-f451-42da-b40c-59f3dc4e4385-tigera-ca-bundle\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330208 kubelet[3211]: I0702 00:25:01.330138 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-xtables-lock\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330208 kubelet[3211]: I0702 00:25:01.330157 3211 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-cni-bin-dir\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330394 kubelet[3211]: I0702 00:25:01.330171 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77bzb\" (UniqueName: \"kubernetes.io/projected/3cfdc751-f451-42da-b40c-59f3dc4e4385-kube-api-access-77bzb\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330649 kubelet[3211]: I0702 00:25:01.330462 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-lib-modules\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330649 kubelet[3211]: I0702 00:25:01.330490 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-policysync\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330649 kubelet[3211]: I0702 00:25:01.330528 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3cfdc751-f451-42da-b40c-59f3dc4e4385-node-certs\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330649 kubelet[3211]: I0702 00:25:01.330548 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-cni-log-dir\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.330649 kubelet[3211]: I0702 00:25:01.330609 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3cfdc751-f451-42da-b40c-59f3dc4e4385-var-run-calico\") pod \"calico-node-fqj4x\" (UID: \"3cfdc751-f451-42da-b40c-59f3dc4e4385\") " pod="calico-system/calico-node-fqj4x" Jul 2 00:25:01.405283 kubelet[3211]: I0702 00:25:01.403952 3211 topology_manager.go:215] "Topology Admit Handler" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" podNamespace="calico-system" podName="csi-node-driver-pw44m" Jul 2 00:25:01.406101 kubelet[3211]: E0702 00:25:01.405779 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:01.431663 kubelet[3211]: I0702 00:25:01.430938 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/49c41bcc-2760-45cb-87bc-55a1cf3e0250-registration-dir\") pod \"csi-node-driver-pw44m\" (UID: \"49c41bcc-2760-45cb-87bc-55a1cf3e0250\") " pod="calico-system/csi-node-driver-pw44m" Jul 2 00:25:01.431663 kubelet[3211]: I0702 00:25:01.431026 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/49c41bcc-2760-45cb-87bc-55a1cf3e0250-varrun\") pod \"csi-node-driver-pw44m\" (UID: \"49c41bcc-2760-45cb-87bc-55a1cf3e0250\") " pod="calico-system/csi-node-driver-pw44m" Jul 
2 00:25:01.431663 kubelet[3211]: I0702 00:25:01.431055 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49c41bcc-2760-45cb-87bc-55a1cf3e0250-kubelet-dir\") pod \"csi-node-driver-pw44m\" (UID: \"49c41bcc-2760-45cb-87bc-55a1cf3e0250\") " pod="calico-system/csi-node-driver-pw44m" Jul 2 00:25:01.431663 kubelet[3211]: I0702 00:25:01.431101 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqv6\" (UniqueName: \"kubernetes.io/projected/49c41bcc-2760-45cb-87bc-55a1cf3e0250-kube-api-access-pvqv6\") pod \"csi-node-driver-pw44m\" (UID: \"49c41bcc-2760-45cb-87bc-55a1cf3e0250\") " pod="calico-system/csi-node-driver-pw44m" Jul 2 00:25:01.431663 kubelet[3211]: I0702 00:25:01.431118 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/49c41bcc-2760-45cb-87bc-55a1cf3e0250-socket-dir\") pod \"csi-node-driver-pw44m\" (UID: \"49c41bcc-2760-45cb-87bc-55a1cf3e0250\") " pod="calico-system/csi-node-driver-pw44m" Jul 2 00:25:01.437643 kubelet[3211]: E0702 00:25:01.437604 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.437811 kubelet[3211]: W0702 00:25:01.437796 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.437909 kubelet[3211]: E0702 00:25:01.437896 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.439923 kubelet[3211]: E0702 00:25:01.439894 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.440132 kubelet[3211]: W0702 00:25:01.440055 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.440132 kubelet[3211]: E0702 00:25:01.440085 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.443393 kubelet[3211]: E0702 00:25:01.443268 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.443393 kubelet[3211]: W0702 00:25:01.443291 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.443393 kubelet[3211]: E0702 00:25:01.443312 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.460331 kubelet[3211]: E0702 00:25:01.460271 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.460331 kubelet[3211]: W0702 00:25:01.460300 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.460331 kubelet[3211]: E0702 00:25:01.460321 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.488988 containerd[1736]: time="2024-07-02T00:25:01.488499122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c478f5d7c-97d8z,Uid:a6a69536-6190-4d9b-aadf-b689bb45fe49,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:01.532550 kubelet[3211]: E0702 00:25:01.532513 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.532550 kubelet[3211]: W0702 00:25:01.532541 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.532740 kubelet[3211]: E0702 00:25:01.532595 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.532834 kubelet[3211]: E0702 00:25:01.532814 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.532834 kubelet[3211]: W0702 00:25:01.532830 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.532834 kubelet[3211]: E0702 00:25:01.532841 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.533243 kubelet[3211]: E0702 00:25:01.533227 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.533609 kubelet[3211]: W0702 00:25:01.533362 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.533609 kubelet[3211]: E0702 00:25:01.533393 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.534312 kubelet[3211]: E0702 00:25:01.534280 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.534312 kubelet[3211]: W0702 00:25:01.534302 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.534469 kubelet[3211]: E0702 00:25:01.534321 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.535270 kubelet[3211]: E0702 00:25:01.534653 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.535270 kubelet[3211]: W0702 00:25:01.534671 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.535270 kubelet[3211]: E0702 00:25:01.534732 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.535270 kubelet[3211]: E0702 00:25:01.534952 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.535270 kubelet[3211]: W0702 00:25:01.534996 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.535270 kubelet[3211]: E0702 00:25:01.535139 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.535914 kubelet[3211]: E0702 00:25:01.535857 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.535914 kubelet[3211]: W0702 00:25:01.535896 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.536234 kubelet[3211]: E0702 00:25:01.536128 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.537189 kubelet[3211]: E0702 00:25:01.536916 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.537189 kubelet[3211]: W0702 00:25:01.537033 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.537542 kubelet[3211]: E0702 00:25:01.537490 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.538270 kubelet[3211]: E0702 00:25:01.538133 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.538270 kubelet[3211]: W0702 00:25:01.538148 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.538585 kubelet[3211]: E0702 00:25:01.538529 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.539932 kubelet[3211]: E0702 00:25:01.539895 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.540379 kubelet[3211]: W0702 00:25:01.540284 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.540623 kubelet[3211]: E0702 00:25:01.540500 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.541103 kubelet[3211]: E0702 00:25:01.541026 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.541103 kubelet[3211]: W0702 00:25:01.541040 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.541297 kubelet[3211]: E0702 00:25:01.541196 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.541687 kubelet[3211]: E0702 00:25:01.541615 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.541687 kubelet[3211]: W0702 00:25:01.541628 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.542063 kubelet[3211]: E0702 00:25:01.541947 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.542478 kubelet[3211]: E0702 00:25:01.542365 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.542478 kubelet[3211]: W0702 00:25:01.542391 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.542883 kubelet[3211]: E0702 00:25:01.542815 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.543032 kubelet[3211]: E0702 00:25:01.542954 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.543032 kubelet[3211]: W0702 00:25:01.542963 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.543255 kubelet[3211]: E0702 00:25:01.543141 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.543688 kubelet[3211]: E0702 00:25:01.543505 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.543688 kubelet[3211]: W0702 00:25:01.543519 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.543688 kubelet[3211]: E0702 00:25:01.543590 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.545278 kubelet[3211]: E0702 00:25:01.545135 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.545278 kubelet[3211]: W0702 00:25:01.545150 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.545278 kubelet[3211]: E0702 00:25:01.545253 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.545708 kubelet[3211]: E0702 00:25:01.545551 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.545708 kubelet[3211]: W0702 00:25:01.545634 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.545906 kubelet[3211]: E0702 00:25:01.545799 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.546162 kubelet[3211]: E0702 00:25:01.546139 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.546332 kubelet[3211]: W0702 00:25:01.546256 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.546472 kubelet[3211]: E0702 00:25:01.546395 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.546858 kubelet[3211]: E0702 00:25:01.546763 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.546858 kubelet[3211]: W0702 00:25:01.546792 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.547119 kubelet[3211]: E0702 00:25:01.547044 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.547369 kubelet[3211]: E0702 00:25:01.547287 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.547369 kubelet[3211]: W0702 00:25:01.547299 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.548540 kubelet[3211]: E0702 00:25:01.547412 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.548874 kubelet[3211]: E0702 00:25:01.548714 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.548874 kubelet[3211]: W0702 00:25:01.548727 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.549151 kubelet[3211]: E0702 00:25:01.548989 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.549524 kubelet[3211]: E0702 00:25:01.549509 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.549922 kubelet[3211]: W0702 00:25:01.549808 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.550099 kubelet[3211]: E0702 00:25:01.549995 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.550475 kubelet[3211]: E0702 00:25:01.550265 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.550475 kubelet[3211]: W0702 00:25:01.550285 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.552001 kubelet[3211]: E0702 00:25:01.551793 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.552001 kubelet[3211]: W0702 00:25:01.551806 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.552001 kubelet[3211]: E0702 00:25:01.551819 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:01.552001 kubelet[3211]: E0702 00:25:01.551849 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.552272 kubelet[3211]: E0702 00:25:01.552260 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.552446 kubelet[3211]: W0702 00:25:01.552429 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.552581 kubelet[3211]: E0702 00:25:01.552518 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.553318 containerd[1736]: time="2024-07-02T00:25:01.550759767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:01.553318 containerd[1736]: time="2024-07-02T00:25:01.550820247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.553318 containerd[1736]: time="2024-07-02T00:25:01.550845367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:01.553318 containerd[1736]: time="2024-07-02T00:25:01.550859807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.568539 kubelet[3211]: E0702 00:25:01.568512 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:01.568715 kubelet[3211]: W0702 00:25:01.568698 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:01.568826 kubelet[3211]: E0702 00:25:01.568810 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:01.578825 systemd[1]: Started cri-containerd-e4c18ff1952d30cbd49917819b17def20767a748a28bcdcf19c96157b89b636b.scope - libcontainer container e4c18ff1952d30cbd49917819b17def20767a748a28bcdcf19c96157b89b636b. Jul 2 00:25:01.587325 containerd[1736]: time="2024-07-02T00:25:01.587276533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fqj4x,Uid:3cfdc751-f451-42da-b40c-59f3dc4e4385,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:01.631407 containerd[1736]: time="2024-07-02T00:25:01.631294276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c478f5d7c-97d8z,Uid:a6a69536-6190-4d9b-aadf-b689bb45fe49,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4c18ff1952d30cbd49917819b17def20767a748a28bcdcf19c96157b89b636b\"" Jul 2 00:25:01.634802 containerd[1736]: time="2024-07-02T00:25:01.634651865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 00:25:01.643721 containerd[1736]: time="2024-07-02T00:25:01.643280638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:01.643721 containerd[1736]: time="2024-07-02T00:25:01.643341638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.643721 containerd[1736]: time="2024-07-02T00:25:01.643362478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:01.643721 containerd[1736]: time="2024-07-02T00:25:01.643376398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:01.663758 systemd[1]: Started cri-containerd-27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428.scope - libcontainer container 27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428. Jul 2 00:25:01.702404 containerd[1736]: time="2024-07-02T00:25:01.702174294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fqj4x,Uid:3cfdc751-f451-42da-b40c-59f3dc4e4385,Namespace:calico-system,Attempt:0,} returns sandbox id \"27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428\"" Jul 2 00:25:02.694583 kubelet[3211]: E0702 00:25:02.693771 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:03.818388 containerd[1736]: time="2024-07-02T00:25:03.818336044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:03.820940 containerd[1736]: time="2024-07-02T00:25:03.820895396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes 
read=27476513" Jul 2 00:25:03.825077 containerd[1736]: time="2024-07-02T00:25:03.824595465Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:03.832340 containerd[1736]: time="2024-07-02T00:25:03.832284321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:03.834539 containerd[1736]: time="2024-07-02T00:25:03.834488674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.199786969s" Jul 2 00:25:03.834766 containerd[1736]: time="2024-07-02T00:25:03.834745673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jul 2 00:25:03.841358 containerd[1736]: time="2024-07-02T00:25:03.841307973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:25:03.853163 containerd[1736]: time="2024-07-02T00:25:03.852951896Z" level=info msg="CreateContainer within sandbox \"e4c18ff1952d30cbd49917819b17def20767a748a28bcdcf19c96157b89b636b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 00:25:03.893001 containerd[1736]: time="2024-07-02T00:25:03.892959051Z" level=info msg="CreateContainer within sandbox \"e4c18ff1952d30cbd49917819b17def20767a748a28bcdcf19c96157b89b636b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"32e7c80b5758c6f058b9ce22e7530e94f7162a823f5fa7219fb7a173a5b770f2\"" Jul 2 
00:25:03.893805 containerd[1736]: time="2024-07-02T00:25:03.893781009Z" level=info msg="StartContainer for \"32e7c80b5758c6f058b9ce22e7530e94f7162a823f5fa7219fb7a173a5b770f2\"" Jul 2 00:25:03.926783 systemd[1]: Started cri-containerd-32e7c80b5758c6f058b9ce22e7530e94f7162a823f5fa7219fb7a173a5b770f2.scope - libcontainer container 32e7c80b5758c6f058b9ce22e7530e94f7162a823f5fa7219fb7a173a5b770f2. Jul 2 00:25:03.964635 containerd[1736]: time="2024-07-02T00:25:03.964495548Z" level=info msg="StartContainer for \"32e7c80b5758c6f058b9ce22e7530e94f7162a823f5fa7219fb7a173a5b770f2\" returns successfully" Jul 2 00:25:04.694235 kubelet[3211]: E0702 00:25:04.693835 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:04.797432 kubelet[3211]: I0702 00:25:04.797329 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c478f5d7c-97d8z" podStartSLOduration=1.594281067 podStartE2EDuration="3.797147987s" podCreationTimestamp="2024-07-02 00:25:01 +0000 UTC" firstStartedPulling="2024-07-02 00:25:01.633658588 +0000 UTC m=+23.051671038" lastFinishedPulling="2024-07-02 00:25:03.836525548 +0000 UTC m=+25.254537958" observedRunningTime="2024-07-02 00:25:04.795757391 +0000 UTC m=+26.213769881" watchObservedRunningTime="2024-07-02 00:25:04.797147987 +0000 UTC m=+26.215160437" Jul 2 00:25:04.850108 kubelet[3211]: E0702 00:25:04.850027 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:04.850108 kubelet[3211]: W0702 00:25:04.850056 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Jul 2 00:25:04.850108 kubelet[3211]: E0702 00:25:04.850079 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.781585 containerd[1736]: time="2024-07-02T00:25:05.781018114Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:05.784319 containerd[1736]: time="2024-07-02T00:25:05.784269104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jul 2 00:25:05.787900 kubelet[3211]: I0702 00:25:05.787870 3211 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:25:05.789069 containerd[1736]: time="2024-07-02T00:25:05.788189452Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:05.792644 containerd[1736]: time="2024-07-02T00:25:05.792180879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:05.793170 containerd[1736]: time="2024-07-02T00:25:05.793138796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.951642464s" Jul 2 00:25:05.793260 containerd[1736]: time="2024-07-02T00:25:05.793245076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jul 2 00:25:05.797531 containerd[1736]: time="2024-07-02T00:25:05.797190104Z" level=info
msg="CreateContainer within sandbox \"27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:25:05.848588 containerd[1736]: time="2024-07-02T00:25:05.848524303Z" level=info msg="CreateContainer within sandbox \"27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49\"" Jul 2 00:25:05.851657 containerd[1736]: time="2024-07-02T00:25:05.850614777Z" level=info msg="StartContainer for \"63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49\"" Jul 2 00:25:05.863926 kubelet[3211]: E0702 00:25:05.863775 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.863926 kubelet[3211]: W0702 00:25:05.863800 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.863926 kubelet[3211]: E0702 00:25:05.863821 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jul 2 00:25:05.881101 kubelet[3211]: E0702 00:25:05.881071 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.881101 kubelet[3211]: W0702 00:25:05.881096 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.881187 kubelet[3211]: E0702 00:25:05.881108 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.881661 kubelet[3211]: E0702 00:25:05.881642 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.881661 kubelet[3211]: W0702 00:25:05.881662 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.881754 kubelet[3211]: E0702 00:25:05.881673 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.883156 kubelet[3211]: E0702 00:25:05.883107 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.883156 kubelet[3211]: W0702 00:25:05.883123 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.883156 kubelet[3211]: E0702 00:25:05.883135 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.883546 kubelet[3211]: E0702 00:25:05.883493 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.883546 kubelet[3211]: W0702 00:25:05.883510 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.883661 kubelet[3211]: E0702 00:25:05.883586 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.884393 kubelet[3211]: E0702 00:25:05.884265 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.884393 kubelet[3211]: W0702 00:25:05.884284 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.884393 kubelet[3211]: E0702 00:25:05.884299 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.884754 kubelet[3211]: E0702 00:25:05.884665 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.884754 kubelet[3211]: W0702 00:25:05.884678 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.884754 kubelet[3211]: E0702 00:25:05.884690 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.885036 kubelet[3211]: E0702 00:25:05.884920 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.885036 kubelet[3211]: W0702 00:25:05.884931 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.885036 kubelet[3211]: E0702 00:25:05.884941 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.885199 kubelet[3211]: E0702 00:25:05.885188 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.885257 kubelet[3211]: W0702 00:25:05.885246 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.885522 kubelet[3211]: E0702 00:25:05.885299 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.885661 kubelet[3211]: E0702 00:25:05.885649 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.885731 kubelet[3211]: W0702 00:25:05.885720 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.885787 kubelet[3211]: E0702 00:25:05.885778 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.886005 kubelet[3211]: E0702 00:25:05.885992 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.886081 kubelet[3211]: W0702 00:25:05.886070 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.886137 kubelet[3211]: E0702 00:25:05.886128 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.886367 kubelet[3211]: E0702 00:25:05.886353 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.886526 kubelet[3211]: W0702 00:25:05.886435 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.886526 kubelet[3211]: E0702 00:25:05.886450 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.886692 kubelet[3211]: E0702 00:25:05.886681 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.886756 kubelet[3211]: W0702 00:25:05.886745 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.886869 kubelet[3211]: E0702 00:25:05.886796 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.886965 kubelet[3211]: E0702 00:25:05.886954 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.887072 kubelet[3211]: W0702 00:25:05.887060 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.887212 kubelet[3211]: E0702 00:25:05.887118 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.887305 kubelet[3211]: E0702 00:25:05.887294 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.887364 kubelet[3211]: W0702 00:25:05.887353 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.887417 kubelet[3211]: E0702 00:25:05.887407 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.887749 systemd[1]: Started cri-containerd-63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49.scope - libcontainer container 63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49. 
Jul 2 00:25:05.888818 kubelet[3211]: E0702 00:25:05.888183 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.888818 kubelet[3211]: W0702 00:25:05.888197 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.888818 kubelet[3211]: E0702 00:25:05.888290 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.889015 kubelet[3211]: E0702 00:25:05.889002 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.889188 kubelet[3211]: W0702 00:25:05.889086 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.889188 kubelet[3211]: E0702 00:25:05.889103 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.889432 kubelet[3211]: E0702 00:25:05.889334 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.889432 kubelet[3211]: W0702 00:25:05.889347 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.889432 kubelet[3211]: E0702 00:25:05.889357 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.889612 kubelet[3211]: E0702 00:25:05.889589 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.889780 kubelet[3211]: W0702 00:25:05.889665 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.889780 kubelet[3211]: E0702 00:25:05.889681 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.889917 kubelet[3211]: E0702 00:25:05.889905 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.889969 kubelet[3211]: W0702 00:25:05.889959 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.890430 kubelet[3211]: E0702 00:25:05.890087 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:25:05.890533 kubelet[3211]: E0702 00:25:05.890519 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:25:05.890626 kubelet[3211]: W0702 00:25:05.890614 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:25:05.890697 kubelet[3211]: E0702 00:25:05.890685 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:25:05.918550 containerd[1736]: time="2024-07-02T00:25:05.918507565Z" level=info msg="StartContainer for \"63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49\" returns successfully" Jul 2 00:25:05.930063 systemd[1]: cri-containerd-63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49.scope: Deactivated successfully. Jul 2 00:25:05.955457 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49-rootfs.mount: Deactivated successfully. 
Jul 2 00:25:06.693914 kubelet[3211]: E0702 00:25:06.692522 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:06.852497 containerd[1736]: time="2024-07-02T00:25:06.852421128Z" level=info msg="shim disconnected" id=63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49 namespace=k8s.io Jul 2 00:25:06.852497 containerd[1736]: time="2024-07-02T00:25:06.852481687Z" level=warning msg="cleaning up after shim disconnected" id=63457837b170d1e6b36829f95d5d7b683a6eabb3da8a0190f7c0b03c0ae78d49 namespace=k8s.io Jul 2 00:25:06.852497 containerd[1736]: time="2024-07-02T00:25:06.852491127Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:07.796935 containerd[1736]: time="2024-07-02T00:25:07.796631978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:25:08.693593 kubelet[3211]: E0702 00:25:08.693005 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:10.693814 kubelet[3211]: E0702 00:25:10.692817 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:11.753398 containerd[1736]: time="2024-07-02T00:25:11.753338325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:11.756005 containerd[1736]: time="2024-07-02T00:25:11.755955198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jul 2 00:25:11.759759 containerd[1736]: time="2024-07-02T00:25:11.759708267Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:11.764271 containerd[1736]: time="2024-07-02T00:25:11.764211693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:11.765311 containerd[1736]: time="2024-07-02T00:25:11.765088371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.968415753s" Jul 2 00:25:11.765311 containerd[1736]: time="2024-07-02T00:25:11.765125451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jul 2 00:25:11.767764 containerd[1736]: time="2024-07-02T00:25:11.767698123Z" level=info msg="CreateContainer within sandbox \"27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:25:11.814967 containerd[1736]: time="2024-07-02T00:25:11.814829425Z" level=info msg="CreateContainer within sandbox \"27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889\"" Jul 2 00:25:11.815486 containerd[1736]: time="2024-07-02T00:25:11.815442383Z" level=info msg="StartContainer for \"416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889\"" Jul 2 00:25:11.855039 systemd[1]: run-containerd-runc-k8s.io-416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889-runc.0frIOv.mount: Deactivated successfully. Jul 2 00:25:11.864838 systemd[1]: Started cri-containerd-416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889.scope - libcontainer container 416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889. Jul 2 00:25:11.897944 containerd[1736]: time="2024-07-02T00:25:11.897803822Z" level=info msg="StartContainer for \"416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889\" returns successfully" Jul 2 00:25:12.693171 kubelet[3211]: E0702 00:25:12.692851 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:12.889470 systemd[1]: cri-containerd-416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889.scope: Deactivated successfully. Jul 2 00:25:12.899914 kubelet[3211]: I0702 00:25:12.899871 3211 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 00:25:12.921067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889-rootfs.mount: Deactivated successfully. 
Jul 2 00:25:12.935591 kubelet[3211]: I0702 00:25:12.932518 3211 topology_manager.go:215] "Topology Admit Handler" podUID="4e7e441b-83c6-45ca-9941-b2bb675b47a8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8rbpz" Jul 2 00:25:13.234459 kubelet[3211]: I0702 00:25:12.947183 3211 topology_manager.go:215] "Topology Admit Handler" podUID="dba33db4-da4e-40f4-abe2-61c91ad79197" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xkfm7" Jul 2 00:25:13.234459 kubelet[3211]: I0702 00:25:12.947587 3211 topology_manager.go:215] "Topology Admit Handler" podUID="83ee114b-f9bf-4eb9-8231-c5444d239bae" podNamespace="calico-system" podName="calico-kube-controllers-855df8f485-kwb2l" Jul 2 00:25:13.234459 kubelet[3211]: I0702 00:25:13.031580 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e7e441b-83c6-45ca-9941-b2bb675b47a8-config-volume\") pod \"coredns-7db6d8ff4d-8rbpz\" (UID: \"4e7e441b-83c6-45ca-9941-b2bb675b47a8\") " pod="kube-system/coredns-7db6d8ff4d-8rbpz" Jul 2 00:25:13.234459 kubelet[3211]: I0702 00:25:13.031626 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tnxr\" (UniqueName: \"kubernetes.io/projected/4e7e441b-83c6-45ca-9941-b2bb675b47a8-kube-api-access-2tnxr\") pod \"coredns-7db6d8ff4d-8rbpz\" (UID: \"4e7e441b-83c6-45ca-9941-b2bb675b47a8\") " pod="kube-system/coredns-7db6d8ff4d-8rbpz" Jul 2 00:25:13.234459 kubelet[3211]: I0702 00:25:13.131818 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5d9k\" (UniqueName: \"kubernetes.io/projected/83ee114b-f9bf-4eb9-8231-c5444d239bae-kube-api-access-d5d9k\") pod \"calico-kube-controllers-855df8f485-kwb2l\" (UID: \"83ee114b-f9bf-4eb9-8231-c5444d239bae\") " pod="calico-system/calico-kube-controllers-855df8f485-kwb2l" Jul 2 00:25:13.234459 kubelet[3211]: I0702 00:25:13.131856 
3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dba33db4-da4e-40f4-abe2-61c91ad79197-config-volume\") pod \"coredns-7db6d8ff4d-xkfm7\" (UID: \"dba33db4-da4e-40f4-abe2-61c91ad79197\") " pod="kube-system/coredns-7db6d8ff4d-xkfm7" Jul 2 00:25:12.956410 systemd[1]: Created slice kubepods-burstable-pod4e7e441b_83c6_45ca_9941_b2bb675b47a8.slice - libcontainer container kubepods-burstable-pod4e7e441b_83c6_45ca_9941_b2bb675b47a8.slice. Jul 2 00:25:13.234799 kubelet[3211]: I0702 00:25:13.131877 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbmwc\" (UniqueName: \"kubernetes.io/projected/dba33db4-da4e-40f4-abe2-61c91ad79197-kube-api-access-vbmwc\") pod \"coredns-7db6d8ff4d-xkfm7\" (UID: \"dba33db4-da4e-40f4-abe2-61c91ad79197\") " pod="kube-system/coredns-7db6d8ff4d-xkfm7" Jul 2 00:25:13.234799 kubelet[3211]: I0702 00:25:13.131897 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83ee114b-f9bf-4eb9-8231-c5444d239bae-tigera-ca-bundle\") pod \"calico-kube-controllers-855df8f485-kwb2l\" (UID: \"83ee114b-f9bf-4eb9-8231-c5444d239bae\") " pod="calico-system/calico-kube-controllers-855df8f485-kwb2l" Jul 2 00:25:12.965974 systemd[1]: Created slice kubepods-besteffort-pod83ee114b_f9bf_4eb9_8231_c5444d239bae.slice - libcontainer container kubepods-besteffort-pod83ee114b_f9bf_4eb9_8231_c5444d239bae.slice. Jul 2 00:25:12.975399 systemd[1]: Created slice kubepods-burstable-poddba33db4_da4e_40f4_abe2_61c91ad79197.slice - libcontainer container kubepods-burstable-poddba33db4_da4e_40f4_abe2_61c91ad79197.slice. 
Jul 2 00:25:13.533960 containerd[1736]: time="2024-07-02T00:25:13.533850832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855df8f485-kwb2l,Uid:83ee114b-f9bf-4eb9-8231-c5444d239bae,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:13.536075 containerd[1736]: time="2024-07-02T00:25:13.536034706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xkfm7,Uid:dba33db4-da4e-40f4-abe2-61c91ad79197,Namespace:kube-system,Attempt:0,}" Jul 2 00:25:13.536350 containerd[1736]: time="2024-07-02T00:25:13.536316385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rbpz,Uid:4e7e441b-83c6-45ca-9941-b2bb675b47a8,Namespace:kube-system,Attempt:0,}" Jul 2 00:25:14.049921 containerd[1736]: time="2024-07-02T00:25:14.049782241Z" level=info msg="shim disconnected" id=416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889 namespace=k8s.io Jul 2 00:25:14.049921 containerd[1736]: time="2024-07-02T00:25:14.049914561Z" level=warning msg="cleaning up after shim disconnected" id=416c92042328a3ef6ca979db081eebc3fbc7f9d2a761edfc67b6a301dbe68889 namespace=k8s.io Jul 2 00:25:14.049921 containerd[1736]: time="2024-07-02T00:25:14.049924801Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:25:14.183678 containerd[1736]: time="2024-07-02T00:25:14.183628489Z" level=error msg="Failed to destroy network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.187219 containerd[1736]: time="2024-07-02T00:25:14.187000840Z" level=error msg="encountered an error cleaning up failed sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.187219 containerd[1736]: time="2024-07-02T00:25:14.187085719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xkfm7,Uid:dba33db4-da4e-40f4-abe2-61c91ad79197,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.187722 kubelet[3211]: E0702 00:25:14.187412 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.187722 kubelet[3211]: E0702 00:25:14.187482 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xkfm7" Jul 2 00:25:14.187722 kubelet[3211]: E0702 00:25:14.187503 3211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-xkfm7" Jul 2 00:25:14.189216 kubelet[3211]: E0702 00:25:14.188304 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-xkfm7_kube-system(dba33db4-da4e-40f4-abe2-61c91ad79197)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-xkfm7_kube-system(dba33db4-da4e-40f4-abe2-61c91ad79197)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xkfm7" podUID="dba33db4-da4e-40f4-abe2-61c91ad79197" Jul 2 00:25:14.197035 containerd[1736]: time="2024-07-02T00:25:14.196985810Z" level=error msg="Failed to destroy network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.197547 containerd[1736]: time="2024-07-02T00:25:14.197491289Z" level=error msg="encountered an error cleaning up failed sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.197744 containerd[1736]: time="2024-07-02T00:25:14.197686328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855df8f485-kwb2l,Uid:83ee114b-f9bf-4eb9-8231-c5444d239bae,Namespace:calico-system,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.198085 kubelet[3211]: E0702 00:25:14.198050 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.198154 kubelet[3211]: E0702 00:25:14.198112 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855df8f485-kwb2l" Jul 2 00:25:14.198154 kubelet[3211]: E0702 00:25:14.198134 3211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-855df8f485-kwb2l" Jul 2 00:25:14.198212 kubelet[3211]: E0702 00:25:14.198178 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-855df8f485-kwb2l_calico-system(83ee114b-f9bf-4eb9-8231-c5444d239bae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-855df8f485-kwb2l_calico-system(83ee114b-f9bf-4eb9-8231-c5444d239bae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855df8f485-kwb2l" podUID="83ee114b-f9bf-4eb9-8231-c5444d239bae" Jul 2 00:25:14.211810 containerd[1736]: time="2024-07-02T00:25:14.211754927Z" level=error msg="Failed to destroy network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.212173 containerd[1736]: time="2024-07-02T00:25:14.212132206Z" level=error msg="encountered an error cleaning up failed sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.212227 containerd[1736]: time="2024-07-02T00:25:14.212200046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rbpz,Uid:4e7e441b-83c6-45ca-9941-b2bb675b47a8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.212583 kubelet[3211]: E0702 00:25:14.212424 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.212583 kubelet[3211]: E0702 00:25:14.212479 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8rbpz" Jul 2 00:25:14.212583 kubelet[3211]: E0702 00:25:14.212500 3211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8rbpz" Jul 2 00:25:14.212763 kubelet[3211]: E0702 00:25:14.212545 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8rbpz_kube-system(4e7e441b-83c6-45ca-9941-b2bb675b47a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8rbpz_kube-system(4e7e441b-83c6-45ca-9941-b2bb675b47a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8rbpz" podUID="4e7e441b-83c6-45ca-9941-b2bb675b47a8" Jul 2 00:25:14.699928 systemd[1]: Created slice kubepods-besteffort-pod49c41bcc_2760_45cb_87bc_55a1cf3e0250.slice - libcontainer container kubepods-besteffort-pod49c41bcc_2760_45cb_87bc_55a1cf3e0250.slice. Jul 2 00:25:14.702603 containerd[1736]: time="2024-07-02T00:25:14.702541370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pw44m,Uid:49c41bcc-2760-45cb-87bc-55a1cf3e0250,Namespace:calico-system,Attempt:0,}" Jul 2 00:25:14.776575 containerd[1736]: time="2024-07-02T00:25:14.776408794Z" level=error msg="Failed to destroy network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.776806 containerd[1736]: time="2024-07-02T00:25:14.776767553Z" level=error msg="encountered an error cleaning up failed sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.776844 containerd[1736]: time="2024-07-02T00:25:14.776819833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pw44m,Uid:49c41bcc-2760-45cb-87bc-55a1cf3e0250,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.777143 kubelet[3211]: E0702 00:25:14.777093 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.777222 kubelet[3211]: E0702 00:25:14.777164 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pw44m" Jul 2 00:25:14.777222 kubelet[3211]: E0702 00:25:14.777189 3211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pw44m" Jul 2 00:25:14.777272 kubelet[3211]: E0702 00:25:14.777229 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pw44m_calico-system(49c41bcc-2760-45cb-87bc-55a1cf3e0250)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pw44m_calico-system(49c41bcc-2760-45cb-87bc-55a1cf3e0250)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:14.812191 kubelet[3211]: I0702 00:25:14.812153 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:14.813730 containerd[1736]: time="2024-07-02T00:25:14.813690885Z" level=info msg="StopPodSandbox for \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\"" Jul 2 00:25:14.815241 containerd[1736]: time="2024-07-02T00:25:14.814642722Z" level=info msg="Ensure that sandbox 3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c in task-service has been cleanup successfully" Jul 2 00:25:14.815356 kubelet[3211]: I0702 00:25:14.814856 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:14.816489 containerd[1736]: time="2024-07-02T00:25:14.815922438Z" level=info msg="StopPodSandbox for \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\"" Jul 2 00:25:14.817399 containerd[1736]: time="2024-07-02T00:25:14.816543996Z" level=info msg="Ensure that sandbox 155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22 in task-service has been cleanup successfully" Jul 2 00:25:14.822979 containerd[1736]: time="2024-07-02T00:25:14.822854698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:25:14.825651 kubelet[3211]: I0702 00:25:14.825604 3211 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:14.829360 containerd[1736]: time="2024-07-02T00:25:14.827051245Z" level=info msg="StopPodSandbox for \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\"" Jul 2 00:25:14.829360 containerd[1736]: time="2024-07-02T00:25:14.827270485Z" level=info msg="Ensure that sandbox e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03 in task-service has been cleanup successfully" Jul 2 00:25:14.833203 kubelet[3211]: I0702 00:25:14.833168 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:14.834542 containerd[1736]: time="2024-07-02T00:25:14.834504824Z" level=info msg="StopPodSandbox for \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\"" Jul 2 00:25:14.836933 containerd[1736]: time="2024-07-02T00:25:14.836888937Z" level=info msg="Ensure that sandbox ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077 in task-service has been cleanup successfully" Jul 2 00:25:14.887666 containerd[1736]: time="2024-07-02T00:25:14.887618868Z" level=error msg="StopPodSandbox for \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\" failed" error="failed to destroy network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.888066 kubelet[3211]: E0702 00:25:14.888029 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:14.888233 kubelet[3211]: E0702 00:25:14.888189 3211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c"} Jul 2 00:25:14.888314 kubelet[3211]: E0702 00:25:14.888299 3211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49c41bcc-2760-45cb-87bc-55a1cf3e0250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:14.888449 kubelet[3211]: E0702 00:25:14.888406 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49c41bcc-2760-45cb-87bc-55a1cf3e0250\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pw44m" podUID="49c41bcc-2760-45cb-87bc-55a1cf3e0250" Jul 2 00:25:14.896957 containerd[1736]: time="2024-07-02T00:25:14.896855481Z" level=error msg="StopPodSandbox for \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\" failed" error="failed to destroy network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.897421 kubelet[3211]: E0702 00:25:14.897283 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:14.897421 kubelet[3211]: E0702 00:25:14.897332 3211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22"} Jul 2 00:25:14.897421 kubelet[3211]: E0702 00:25:14.897366 3211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e7e441b-83c6-45ca-9941-b2bb675b47a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:14.897421 kubelet[3211]: E0702 00:25:14.897391 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e7e441b-83c6-45ca-9941-b2bb675b47a8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8rbpz" 
podUID="4e7e441b-83c6-45ca-9941-b2bb675b47a8" Jul 2 00:25:14.906471 containerd[1736]: time="2024-07-02T00:25:14.905181497Z" level=error msg="StopPodSandbox for \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\" failed" error="failed to destroy network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.906605 kubelet[3211]: E0702 00:25:14.905473 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:14.906605 kubelet[3211]: E0702 00:25:14.905524 3211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03"} Jul 2 00:25:14.906805 kubelet[3211]: E0702 00:25:14.906734 3211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dba33db4-da4e-40f4-abe2-61c91ad79197\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:14.906805 kubelet[3211]: E0702 00:25:14.906773 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"dba33db4-da4e-40f4-abe2-61c91ad79197\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-xkfm7" podUID="dba33db4-da4e-40f4-abe2-61c91ad79197" Jul 2 00:25:14.909545 containerd[1736]: time="2024-07-02T00:25:14.909503284Z" level=error msg="StopPodSandbox for \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\" failed" error="failed to destroy network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:25:14.909951 kubelet[3211]: E0702 00:25:14.909914 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:14.910140 kubelet[3211]: E0702 00:25:14.910052 3211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077"} Jul 2 00:25:14.910140 kubelet[3211]: E0702 00:25:14.910089 3211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83ee114b-f9bf-4eb9-8231-c5444d239bae\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:25:14.910140 kubelet[3211]: E0702 00:25:14.910110 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83ee114b-f9bf-4eb9-8231-c5444d239bae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-855df8f485-kwb2l" podUID="83ee114b-f9bf-4eb9-8231-c5444d239bae" Jul 2 00:25:15.090553 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22-shm.mount: Deactivated successfully. Jul 2 00:25:15.091864 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03-shm.mount: Deactivated successfully. Jul 2 00:25:15.091920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077-shm.mount: Deactivated successfully. Jul 2 00:25:20.280752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount652486961.mount: Deactivated successfully. 
Jul 2 00:25:20.983277 containerd[1736]: time="2024-07-02T00:25:20.983215123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:20.989597 containerd[1736]: time="2024-07-02T00:25:20.989524984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jul 2 00:25:20.995139 containerd[1736]: time="2024-07-02T00:25:20.995070166Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:20.998724 containerd[1736]: time="2024-07-02T00:25:20.998641475Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:20.999952 containerd[1736]: time="2024-07-02T00:25:20.999810151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 6.176663135s" Jul 2 00:25:20.999952 containerd[1736]: time="2024-07-02T00:25:20.999851831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jul 2 00:25:21.015815 containerd[1736]: time="2024-07-02T00:25:21.015770742Z" level=info msg="CreateContainer within sandbox \"27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:25:21.046275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2928493440.mount: Deactivated 
successfully. Jul 2 00:25:21.057104 containerd[1736]: time="2024-07-02T00:25:21.057050053Z" level=info msg="CreateContainer within sandbox \"27e0087c8a1db287684978c0a520daea8bf62e74ab20ea8327cef7084cb28428\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e1b0532b9b19f61441dec67af1f8fd6b23b2369161baeabc4b03a7d973c9aaeb\"" Jul 2 00:25:21.058462 containerd[1736]: time="2024-07-02T00:25:21.058424969Z" level=info msg="StartContainer for \"e1b0532b9b19f61441dec67af1f8fd6b23b2369161baeabc4b03a7d973c9aaeb\"" Jul 2 00:25:21.081749 systemd[1]: Started cri-containerd-e1b0532b9b19f61441dec67af1f8fd6b23b2369161baeabc4b03a7d973c9aaeb.scope - libcontainer container e1b0532b9b19f61441dec67af1f8fd6b23b2369161baeabc4b03a7d973c9aaeb. Jul 2 00:25:21.114539 containerd[1736]: time="2024-07-02T00:25:21.114270115Z" level=info msg="StartContainer for \"e1b0532b9b19f61441dec67af1f8fd6b23b2369161baeabc4b03a7d973c9aaeb\" returns successfully" Jul 2 00:25:21.539862 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:25:21.540015 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Jul 2 00:25:27.694452 containerd[1736]: time="2024-07-02T00:25:27.693384056Z" level=info msg="StopPodSandbox for \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\"" Jul 2 00:25:27.694886 containerd[1736]: time="2024-07-02T00:25:27.693401456Z" level=info msg="StopPodSandbox for \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\"" Jul 2 00:25:27.695409 containerd[1736]: time="2024-07-02T00:25:27.693384616Z" level=info msg="StopPodSandbox for \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\"" Jul 2 00:25:27.763442 kubelet[3211]: I0702 00:25:27.763366 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fqj4x" podStartSLOduration=7.47060061 podStartE2EDuration="26.763349163s" podCreationTimestamp="2024-07-02 00:25:01 +0000 UTC" firstStartedPulling="2024-07-02 00:25:01.708214075 +0000 UTC m=+23.126226525" lastFinishedPulling="2024-07-02 00:25:21.000962628 +0000 UTC m=+42.418975078" observedRunningTime="2024-07-02 00:25:21.871469757 +0000 UTC m=+43.289482207" watchObservedRunningTime="2024-07-02 00:25:27.763349163 +0000 UTC m=+49.181361613" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.767 [INFO][4483] k8s.go 608: Cleaning up netns ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.767 [INFO][4483] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" iface="eth0" netns="/var/run/netns/cni-7796f9d9-e688-8cdd-f97a-db6244c3b6f7" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.768 [INFO][4483] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" iface="eth0" netns="/var/run/netns/cni-7796f9d9-e688-8cdd-f97a-db6244c3b6f7" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.768 [INFO][4483] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" iface="eth0" netns="/var/run/netns/cni-7796f9d9-e688-8cdd-f97a-db6244c3b6f7" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.768 [INFO][4483] k8s.go 615: Releasing IP address(es) ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.768 [INFO][4483] utils.go 188: Calico CNI releasing IP address ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.815 [INFO][4499] ipam_plugin.go 411: Releasing address using handleID ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.815 [INFO][4499] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.815 [INFO][4499] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.827 [WARNING][4499] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.828 [INFO][4499] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.829 [INFO][4499] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:27.832627 containerd[1736]: 2024-07-02 00:25:27.830 [INFO][4483] k8s.go 621: Teardown processing complete. ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:27.835925 containerd[1736]: time="2024-07-02T00:25:27.832790112Z" level=info msg="TearDown network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\" successfully" Jul 2 00:25:27.835925 containerd[1736]: time="2024-07-02T00:25:27.832820072Z" level=info msg="StopPodSandbox for \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\" returns successfully" Jul 2 00:25:27.835143 systemd[1]: run-netns-cni\x2d7796f9d9\x2de688\x2d8cdd\x2df97a\x2ddb6244c3b6f7.mount: Deactivated successfully. 
Jul 2 00:25:27.837410 containerd[1736]: time="2024-07-02T00:25:27.837027340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xkfm7,Uid:dba33db4-da4e-40f4-abe2-61c91ad79197,Namespace:kube-system,Attempt:1,}" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.791 [INFO][4482] k8s.go 608: Cleaning up netns ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.792 [INFO][4482] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" iface="eth0" netns="/var/run/netns/cni-a6fadb29-39cb-bf9f-0f71-25aef5aaa197" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.792 [INFO][4482] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" iface="eth0" netns="/var/run/netns/cni-a6fadb29-39cb-bf9f-0f71-25aef5aaa197" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.792 [INFO][4482] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" iface="eth0" netns="/var/run/netns/cni-a6fadb29-39cb-bf9f-0f71-25aef5aaa197" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.792 [INFO][4482] k8s.go 615: Releasing IP address(es) ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.792 [INFO][4482] utils.go 188: Calico CNI releasing IP address ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.817 [INFO][4508] ipam_plugin.go 411: Releasing address using handleID ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.817 [INFO][4508] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.829 [INFO][4508] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.841 [WARNING][4508] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.841 [INFO][4508] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.842 [INFO][4508] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:27.846604 containerd[1736]: 2024-07-02 00:25:27.845 [INFO][4482] k8s.go 621: Teardown processing complete. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:27.849492 systemd[1]: run-netns-cni\x2da6fadb29\x2d39cb\x2dbf9f\x2d0f71\x2d25aef5aaa197.mount: Deactivated successfully. 
Jul 2 00:25:27.851840 containerd[1736]: time="2024-07-02T00:25:27.851779015Z" level=info msg="TearDown network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\" successfully" Jul 2 00:25:27.851840 containerd[1736]: time="2024-07-02T00:25:27.851828815Z" level=info msg="StopPodSandbox for \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\" returns successfully" Jul 2 00:25:27.853099 containerd[1736]: time="2024-07-02T00:25:27.853064251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855df8f485-kwb2l,Uid:83ee114b-f9bf-4eb9-8231-c5444d239bae,Namespace:calico-system,Attempt:1,}" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.783 [INFO][4487] k8s.go 608: Cleaning up netns ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.784 [INFO][4487] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" iface="eth0" netns="/var/run/netns/cni-9a5278dd-13a7-76a7-f6a5-0ef2e0204ff2" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.784 [INFO][4487] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" iface="eth0" netns="/var/run/netns/cni-9a5278dd-13a7-76a7-f6a5-0ef2e0204ff2" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.785 [INFO][4487] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" iface="eth0" netns="/var/run/netns/cni-9a5278dd-13a7-76a7-f6a5-0ef2e0204ff2" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.785 [INFO][4487] k8s.go 615: Releasing IP address(es) ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.785 [INFO][4487] utils.go 188: Calico CNI releasing IP address ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.825 [INFO][4504] ipam_plugin.go 411: Releasing address using handleID ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.826 [INFO][4504] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.842 [INFO][4504] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.857 [WARNING][4504] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.857 [INFO][4504] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.859 [INFO][4504] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:27.862761 containerd[1736]: 2024-07-02 00:25:27.861 [INFO][4487] k8s.go 621: Teardown processing complete. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:27.865527 containerd[1736]: time="2024-07-02T00:25:27.862906341Z" level=info msg="TearDown network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\" successfully" Jul 2 00:25:27.865527 containerd[1736]: time="2024-07-02T00:25:27.862936541Z" level=info msg="StopPodSandbox for \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\" returns successfully" Jul 2 00:25:27.865527 containerd[1736]: time="2024-07-02T00:25:27.864070777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pw44m,Uid:49c41bcc-2760-45cb-87bc-55a1cf3e0250,Namespace:calico-system,Attempt:1,}" Jul 2 00:25:27.866055 systemd[1]: run-netns-cni\x2d9a5278dd\x2d13a7\x2d76a7\x2df6a5\x2d0ef2e0204ff2.mount: Deactivated successfully. 
Jul 2 00:25:28.090283 systemd-networkd[1552]: cali138d684fd15: Link UP Jul 2 00:25:28.091023 systemd-networkd[1552]: cali138d684fd15: Gained carrier Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:27.955 [INFO][4520] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:27.973 [INFO][4520] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0 coredns-7db6d8ff4d- kube-system dba33db4-da4e-40f4-abe2-61c91ad79197 720 0 2024-07-02 00:24:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-3e8d94ffa6 coredns-7db6d8ff4d-xkfm7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali138d684fd15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:27.973 [INFO][4520] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.016 [INFO][4554] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" HandleID="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 
00:25:28.034 [INFO][4554] ipam_plugin.go 264: Auto assigning IP ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" HandleID="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289ed0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-3e8d94ffa6", "pod":"coredns-7db6d8ff4d-xkfm7", "timestamp":"2024-07-02 00:25:28.016585234 +0000 UTC"}, Hostname:"ci-3975.1.1-a-3e8d94ffa6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.035 [INFO][4554] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.035 [INFO][4554] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.035 [INFO][4554] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-3e8d94ffa6' Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.037 [INFO][4554] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.043 [INFO][4554] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.049 [INFO][4554] ipam.go 489: Trying affinity for 192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.051 [INFO][4554] ipam.go 155: Attempting to load block cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.054 [INFO][4554] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.054 [INFO][4554] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.056 [INFO][4554] ipam.go 1685: Creating new handle: k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.071 [INFO][4554] ipam.go 1203: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.080 [INFO][4554] ipam.go 1216: Successfully claimed IPs: [192.168.89.65/26] block=192.168.89.64/26 
handle="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.080 [INFO][4554] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.65/26] handle="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.080 [INFO][4554] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:28.110130 containerd[1736]: 2024-07-02 00:25:28.080 [INFO][4554] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.89.65/26] IPv6=[] ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" HandleID="k8s-pod-network.956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:28.110828 containerd[1736]: 2024-07-02 00:25:28.082 [INFO][4520] k8s.go 386: Populated endpoint ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dba33db4-da4e-40f4-abe2-61c91ad79197", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"", Pod:"coredns-7db6d8ff4d-xkfm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali138d684fd15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:28.110828 containerd[1736]: 2024-07-02 00:25:28.083 [INFO][4520] k8s.go 387: Calico CNI using IPs: [192.168.89.65/32] ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:28.110828 containerd[1736]: 2024-07-02 00:25:28.083 [INFO][4520] dataplane_linux.go 68: Setting the host side veth name to cali138d684fd15 ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:28.110828 containerd[1736]: 2024-07-02 00:25:28.091 [INFO][4520] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" 
WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:28.110828 containerd[1736]: 2024-07-02 00:25:28.093 [INFO][4520] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dba33db4-da4e-40f4-abe2-61c91ad79197", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd", Pod:"coredns-7db6d8ff4d-xkfm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali138d684fd15", MAC:"3e:57:be:49:95:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:28.110828 containerd[1736]: 2024-07-02 00:25:28.107 [INFO][4520] k8s.go 500: Wrote updated endpoint to datastore ContainerID="956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-xkfm7" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:28.136476 systemd-networkd[1552]: calie6b74e8f691: Link UP Jul 2 00:25:28.141319 systemd-networkd[1552]: calie6b74e8f691: Gained carrier Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:27.978 [INFO][4531] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:27.997 [INFO][4531] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0 csi-node-driver- calico-system 49c41bcc-2760-45cb-87bc-55a1cf3e0250 721 0 2024-07-02 00:25:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.1.1-a-3e8d94ffa6 csi-node-driver-pw44m eth0 default [] [] [kns.calico-system ksa.calico-system.default] calie6b74e8f691 [] []}} ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Namespace="calico-system" Pod="csi-node-driver-pw44m" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:27.997 [INFO][4531] k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Namespace="calico-system" Pod="csi-node-driver-pw44m" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.054 [INFO][4562] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" HandleID="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.072 [INFO][4562] ipam_plugin.go 264: Auto assigning IP ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" HandleID="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000611630), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-3e8d94ffa6", "pod":"csi-node-driver-pw44m", "timestamp":"2024-07-02 00:25:28.054639398 +0000 UTC"}, Hostname:"ci-3975.1.1-a-3e8d94ffa6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.073 [INFO][4562] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.080 [INFO][4562] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.080 [INFO][4562] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-3e8d94ffa6' Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.083 [INFO][4562] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.088 [INFO][4562] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.099 [INFO][4562] ipam.go 489: Trying affinity for 192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.103 [INFO][4562] ipam.go 155: Attempting to load block cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.115 [INFO][4562] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.115 [INFO][4562] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.118 [INFO][4562] ipam.go 1685: Creating new handle: k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.124 [INFO][4562] ipam.go 1203: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.130 [INFO][4562] ipam.go 1216: Successfully claimed IPs: [192.168.89.66/26] block=192.168.89.64/26 
handle="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.130 [INFO][4562] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.66/26] handle="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.130 [INFO][4562] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:28.160240 containerd[1736]: 2024-07-02 00:25:28.130 [INFO][4562] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.89.66/26] IPv6=[] ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" HandleID="k8s-pod-network.feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:28.162326 containerd[1736]: 2024-07-02 00:25:28.132 [INFO][4531] k8s.go 386: Populated endpoint ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Namespace="calico-system" Pod="csi-node-driver-pw44m" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49c41bcc-2760-45cb-87bc-55a1cf3e0250", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"", Pod:"csi-node-driver-pw44m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie6b74e8f691", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:28.162326 containerd[1736]: 2024-07-02 00:25:28.133 [INFO][4531] k8s.go 387: Calico CNI using IPs: [192.168.89.66/32] ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Namespace="calico-system" Pod="csi-node-driver-pw44m" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:28.162326 containerd[1736]: 2024-07-02 00:25:28.133 [INFO][4531] dataplane_linux.go 68: Setting the host side veth name to calie6b74e8f691 ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Namespace="calico-system" Pod="csi-node-driver-pw44m" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:28.162326 containerd[1736]: 2024-07-02 00:25:28.142 [INFO][4531] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Namespace="calico-system" Pod="csi-node-driver-pw44m" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:28.162326 containerd[1736]: 2024-07-02 00:25:28.145 [INFO][4531] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" 
Namespace="calico-system" Pod="csi-node-driver-pw44m" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49c41bcc-2760-45cb-87bc-55a1cf3e0250", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af", Pod:"csi-node-driver-pw44m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie6b74e8f691", MAC:"8a:bd:39:f4:7f:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:28.162326 containerd[1736]: 2024-07-02 00:25:28.157 [INFO][4531] k8s.go 500: Wrote updated endpoint to datastore ContainerID="feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af" Namespace="calico-system" Pod="csi-node-driver-pw44m" 
WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:28.167445 containerd[1736]: time="2024-07-02T00:25:28.166676378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:28.167854 containerd[1736]: time="2024-07-02T00:25:28.167752295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.168975 containerd[1736]: time="2024-07-02T00:25:28.168417853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:28.168975 containerd[1736]: time="2024-07-02T00:25:28.168437853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.207838 systemd[1]: Started cri-containerd-956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd.scope - libcontainer container 956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd. Jul 2 00:25:28.216397 systemd-networkd[1552]: cali3a4608a19b0: Link UP Jul 2 00:25:28.217119 systemd-networkd[1552]: cali3a4608a19b0: Gained carrier Jul 2 00:25:28.234140 containerd[1736]: time="2024-07-02T00:25:28.230873023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:28.234140 containerd[1736]: time="2024-07-02T00:25:28.232505258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.234140 containerd[1736]: time="2024-07-02T00:25:28.232527498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:28.234140 containerd[1736]: time="2024-07-02T00:25:28.232537618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:27.990 [INFO][4545] utils.go 100: File /var/lib/calico/mtu does not exist Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.014 [INFO][4545] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0 calico-kube-controllers-855df8f485- calico-system 83ee114b-f9bf-4eb9-8231-c5444d239bae 722 0 2024-07-02 00:25:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:855df8f485 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.1.1-a-3e8d94ffa6 calico-kube-controllers-855df8f485-kwb2l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3a4608a19b0 [] []}} ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.014 [INFO][4545] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.067 [INFO][4566] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" HandleID="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.078 [INFO][4566] ipam_plugin.go 264: Auto assigning IP ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" HandleID="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316770), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.1.1-a-3e8d94ffa6", "pod":"calico-kube-controllers-855df8f485-kwb2l", "timestamp":"2024-07-02 00:25:28.067043641 +0000 UTC"}, Hostname:"ci-3975.1.1-a-3e8d94ffa6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.078 [INFO][4566] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.130 [INFO][4566] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.130 [INFO][4566] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-3e8d94ffa6' Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.137 [INFO][4566] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.150 [INFO][4566] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.163 [INFO][4566] ipam.go 489: Trying affinity for 192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.168 [INFO][4566] ipam.go 155: Attempting to load block cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.173 [INFO][4566] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.173 [INFO][4566] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.178 [INFO][4566] ipam.go 1685: Creating new handle: k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.185 [INFO][4566] ipam.go 1203: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.192 [INFO][4566] ipam.go 1216: Successfully claimed IPs: [192.168.89.67/26] block=192.168.89.64/26 
handle="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.193 [INFO][4566] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.67/26] handle="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.193 [INFO][4566] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:28.241210 containerd[1736]: 2024-07-02 00:25:28.193 [INFO][4566] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.89.67/26] IPv6=[] ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" HandleID="k8s-pod-network.3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:28.241857 containerd[1736]: 2024-07-02 00:25:28.200 [INFO][4545] k8s.go 386: Populated endpoint ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0", GenerateName:"calico-kube-controllers-855df8f485-", Namespace:"calico-system", SelfLink:"", UID:"83ee114b-f9bf-4eb9-8231-c5444d239bae", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855df8f485", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"", Pod:"calico-kube-controllers-855df8f485-kwb2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a4608a19b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:28.241857 containerd[1736]: 2024-07-02 00:25:28.200 [INFO][4545] k8s.go 387: Calico CNI using IPs: [192.168.89.67/32] ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:28.241857 containerd[1736]: 2024-07-02 00:25:28.200 [INFO][4545] dataplane_linux.go 68: Setting the host side veth name to cali3a4608a19b0 ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:28.241857 containerd[1736]: 2024-07-02 00:25:28.216 [INFO][4545] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 
00:25:28.241857 containerd[1736]: 2024-07-02 00:25:28.218 [INFO][4545] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0", GenerateName:"calico-kube-controllers-855df8f485-", Namespace:"calico-system", SelfLink:"", UID:"83ee114b-f9bf-4eb9-8231-c5444d239bae", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855df8f485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea", Pod:"calico-kube-controllers-855df8f485-kwb2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a4608a19b0", MAC:"e6:0e:a9:c0:be:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 
00:25:28.241857 containerd[1736]: 2024-07-02 00:25:28.238 [INFO][4545] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea" Namespace="calico-system" Pod="calico-kube-controllers-855df8f485-kwb2l" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:28.269304 systemd[1]: Started cri-containerd-feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af.scope - libcontainer container feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af. Jul 2 00:25:28.303292 containerd[1736]: time="2024-07-02T00:25:28.302204646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:28.303292 containerd[1736]: time="2024-07-02T00:25:28.303172563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.303452 containerd[1736]: time="2024-07-02T00:25:28.303326483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:28.304101 containerd[1736]: time="2024-07-02T00:25:28.303540962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:28.307672 containerd[1736]: time="2024-07-02T00:25:28.307440870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xkfm7,Uid:dba33db4-da4e-40f4-abe2-61c91ad79197,Namespace:kube-system,Attempt:1,} returns sandbox id \"956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd\"" Jul 2 00:25:28.322796 containerd[1736]: time="2024-07-02T00:25:28.322737864Z" level=info msg="CreateContainer within sandbox \"956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:25:28.344521 containerd[1736]: time="2024-07-02T00:25:28.344171759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pw44m,Uid:49c41bcc-2760-45cb-87bc-55a1cf3e0250,Namespace:calico-system,Attempt:1,} returns sandbox id \"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af\"" Jul 2 00:25:28.357026 containerd[1736]: time="2024-07-02T00:25:28.356893920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:25:28.364812 systemd[1]: Started cri-containerd-3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea.scope - libcontainer container 3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea. 
Jul 2 00:25:28.377982 containerd[1736]: time="2024-07-02T00:25:28.377925496Z" level=info msg="CreateContainer within sandbox \"956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6735c79a071c793ad6d6b13798915738d7894552c1e06caad800c40db9c4d9f1\"" Jul 2 00:25:28.380152 containerd[1736]: time="2024-07-02T00:25:28.378590774Z" level=info msg="StartContainer for \"6735c79a071c793ad6d6b13798915738d7894552c1e06caad800c40db9c4d9f1\"" Jul 2 00:25:28.444751 systemd[1]: Started cri-containerd-6735c79a071c793ad6d6b13798915738d7894552c1e06caad800c40db9c4d9f1.scope - libcontainer container 6735c79a071c793ad6d6b13798915738d7894552c1e06caad800c40db9c4d9f1. Jul 2 00:25:28.456387 containerd[1736]: time="2024-07-02T00:25:28.456338258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-855df8f485-kwb2l,Uid:83ee114b-f9bf-4eb9-8231-c5444d239bae,Namespace:calico-system,Attempt:1,} returns sandbox id \"3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea\"" Jul 2 00:25:28.494140 containerd[1736]: time="2024-07-02T00:25:28.494086423Z" level=info msg="StartContainer for \"6735c79a071c793ad6d6b13798915738d7894552c1e06caad800c40db9c4d9f1\" returns successfully" Jul 2 00:25:28.887670 kubelet[3211]: I0702 00:25:28.886908 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xkfm7" podStartSLOduration=34.88689075 podStartE2EDuration="34.88689075s" podCreationTimestamp="2024-07-02 00:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:28.874263548 +0000 UTC m=+50.292275998" watchObservedRunningTime="2024-07-02 00:25:28.88689075 +0000 UTC m=+50.304903200" Jul 2 00:25:29.476746 systemd-networkd[1552]: calie6b74e8f691: Gained IPv6LL Jul 2 00:25:29.704893 containerd[1736]: time="2024-07-02T00:25:29.704660465Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:29.708643 containerd[1736]: time="2024-07-02T00:25:29.708606613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jul 2 00:25:29.715984 containerd[1736]: time="2024-07-02T00:25:29.715915151Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:29.723688 containerd[1736]: time="2024-07-02T00:25:29.723615768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:29.724819 containerd[1736]: time="2024-07-02T00:25:29.724677044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.367721765s" Jul 2 00:25:29.724819 containerd[1736]: time="2024-07-02T00:25:29.724715524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jul 2 00:25:29.725924 containerd[1736]: time="2024-07-02T00:25:29.725833321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 00:25:29.727530 containerd[1736]: time="2024-07-02T00:25:29.727292997Z" level=info msg="CreateContainer within sandbox \"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:25:29.778141 containerd[1736]: 
time="2024-07-02T00:25:29.778048922Z" level=info msg="CreateContainer within sandbox \"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2ea112455649af16a05abeeec42ba124b1b8bf5725e92a550002ddddfbb3a707\"" Jul 2 00:25:29.779174 containerd[1736]: time="2024-07-02T00:25:29.778850360Z" level=info msg="StartContainer for \"2ea112455649af16a05abeeec42ba124b1b8bf5725e92a550002ddddfbb3a707\"" Jul 2 00:25:29.796781 systemd-networkd[1552]: cali3a4608a19b0: Gained IPv6LL Jul 2 00:25:29.813813 systemd[1]: Started cri-containerd-2ea112455649af16a05abeeec42ba124b1b8bf5725e92a550002ddddfbb3a707.scope - libcontainer container 2ea112455649af16a05abeeec42ba124b1b8bf5725e92a550002ddddfbb3a707. Jul 2 00:25:29.842693 containerd[1736]: time="2024-07-02T00:25:29.842601686Z" level=info msg="StartContainer for \"2ea112455649af16a05abeeec42ba124b1b8bf5725e92a550002ddddfbb3a707\" returns successfully" Jul 2 00:25:29.988704 systemd-networkd[1552]: cali138d684fd15: Gained IPv6LL Jul 2 00:25:30.023890 kubelet[3211]: I0702 00:25:30.023062 3211 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:25:30.478761 systemd-networkd[1552]: vxlan.calico: Link UP Jul 2 00:25:30.478768 systemd-networkd[1552]: vxlan.calico: Gained carrier Jul 2 00:25:30.697572 containerd[1736]: time="2024-07-02T00:25:30.697217930Z" level=info msg="StopPodSandbox for \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\"" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.786 [INFO][4989] k8s.go 608: Cleaning up netns ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.786 [INFO][4989] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" iface="eth0" netns="/var/run/netns/cni-808e7697-4ea3-126d-9b9e-9bc9358ba5a3" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.786 [INFO][4989] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" iface="eth0" netns="/var/run/netns/cni-808e7697-4ea3-126d-9b9e-9bc9358ba5a3" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.787 [INFO][4989] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" iface="eth0" netns="/var/run/netns/cni-808e7697-4ea3-126d-9b9e-9bc9358ba5a3" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.787 [INFO][4989] k8s.go 615: Releasing IP address(es) ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.787 [INFO][4989] utils.go 188: Calico CNI releasing IP address ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.807 [INFO][5005] ipam_plugin.go 411: Releasing address using handleID ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.807 [INFO][5005] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.807 [INFO][5005] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.816 [WARNING][5005] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.816 [INFO][5005] ipam_plugin.go 439: Releasing address using workloadID ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.817 [INFO][5005] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:30.821105 containerd[1736]: 2024-07-02 00:25:30.819 [INFO][4989] k8s.go 621: Teardown processing complete. ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:30.821105 containerd[1736]: time="2024-07-02T00:25:30.821005074Z" level=info msg="TearDown network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\" successfully" Jul 2 00:25:30.821105 containerd[1736]: time="2024-07-02T00:25:30.821038113Z" level=info msg="StopPodSandbox for \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\" returns successfully" Jul 2 00:25:30.823892 systemd[1]: run-netns-cni\x2d808e7697\x2d4ea3\x2d126d\x2d9b9e\x2d9bc9358ba5a3.mount: Deactivated successfully. 
Jul 2 00:25:30.825219 containerd[1736]: time="2024-07-02T00:25:30.825032381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rbpz,Uid:4e7e441b-83c6-45ca-9941-b2bb675b47a8,Namespace:kube-system,Attempt:1,}" Jul 2 00:25:30.975344 systemd-networkd[1552]: calie37433f59eb: Link UP Jul 2 00:25:30.976305 systemd-networkd[1552]: calie37433f59eb: Gained carrier Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.899 [INFO][5016] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0 coredns-7db6d8ff4d- kube-system 4e7e441b-83c6-45ca-9941-b2bb675b47a8 761 0 2024-07-02 00:24:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.1.1-a-3e8d94ffa6 coredns-7db6d8ff4d-8rbpz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie37433f59eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.899 [INFO][5016] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.929 [INFO][5023] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" HandleID="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" 
Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.942 [INFO][5023] ipam_plugin.go 264: Auto assigning IP ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" HandleID="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316220), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.1.1-a-3e8d94ffa6", "pod":"coredns-7db6d8ff4d-8rbpz", "timestamp":"2024-07-02 00:25:30.929394984 +0000 UTC"}, Hostname:"ci-3975.1.1-a-3e8d94ffa6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.942 [INFO][5023] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.942 [INFO][5023] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.942 [INFO][5023] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-3e8d94ffa6' Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.944 [INFO][5023] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.949 [INFO][5023] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.954 [INFO][5023] ipam.go 489: Trying affinity for 192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.956 [INFO][5023] ipam.go 155: Attempting to load block cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.959 [INFO][5023] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.959 [INFO][5023] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.961 [INFO][5023] ipam.go 1685: Creating new handle: k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.965 [INFO][5023] ipam.go 1203: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.969 [INFO][5023] ipam.go 1216: Successfully claimed IPs: [192.168.89.68/26] block=192.168.89.64/26 
handle="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.970 [INFO][5023] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.68/26] handle="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.970 [INFO][5023] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:30.993743 containerd[1736]: 2024-07-02 00:25:30.970 [INFO][5023] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.89.68/26] IPv6=[] ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" HandleID="k8s-pod-network.2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.994810 containerd[1736]: 2024-07-02 00:25:30.972 [INFO][5016] k8s.go 386: Populated endpoint ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e7e441b-83c6-45ca-9941-b2bb675b47a8", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"", Pod:"coredns-7db6d8ff4d-8rbpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie37433f59eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:30.994810 containerd[1736]: 2024-07-02 00:25:30.972 [INFO][5016] k8s.go 387: Calico CNI using IPs: [192.168.89.68/32] ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.994810 containerd[1736]: 2024-07-02 00:25:30.972 [INFO][5016] dataplane_linux.go 68: Setting the host side veth name to calie37433f59eb ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.994810 containerd[1736]: 2024-07-02 00:25:30.976 [INFO][5016] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" 
WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:30.994810 containerd[1736]: 2024-07-02 00:25:30.977 [INFO][5016] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e7e441b-83c6-45ca-9941-b2bb675b47a8", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b", Pod:"coredns-7db6d8ff4d-8rbpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie37433f59eb", MAC:"72:f5:24:c3:48:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:30.994810 containerd[1736]: 2024-07-02 00:25:30.990 [INFO][5016] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8rbpz" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:31.022927 containerd[1736]: time="2024-07-02T00:25:31.022763901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:31.022927 containerd[1736]: time="2024-07-02T00:25:31.022833820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:31.022927 containerd[1736]: time="2024-07-02T00:25:31.022853340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:31.022927 containerd[1736]: time="2024-07-02T00:25:31.022883060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:31.051812 systemd[1]: Started cri-containerd-2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b.scope - libcontainer container 2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b. 
Jul 2 00:25:31.087342 containerd[1736]: time="2024-07-02T00:25:31.086718386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8rbpz,Uid:4e7e441b-83c6-45ca-9941-b2bb675b47a8,Namespace:kube-system,Attempt:1,} returns sandbox id \"2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b\"" Jul 2 00:25:31.093479 containerd[1736]: time="2024-07-02T00:25:31.093130007Z" level=info msg="CreateContainer within sandbox \"2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:25:31.128083 containerd[1736]: time="2024-07-02T00:25:31.127976741Z" level=info msg="CreateContainer within sandbox \"2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c944903b2b23ddd0e42ee5d5ab922719df142fbedfdc63f20e1b0b5114cfe57\"" Jul 2 00:25:31.128840 containerd[1736]: time="2024-07-02T00:25:31.128800938Z" level=info msg="StartContainer for \"1c944903b2b23ddd0e42ee5d5ab922719df142fbedfdc63f20e1b0b5114cfe57\"" Jul 2 00:25:31.155759 systemd[1]: Started cri-containerd-1c944903b2b23ddd0e42ee5d5ab922719df142fbedfdc63f20e1b0b5114cfe57.scope - libcontainer container 1c944903b2b23ddd0e42ee5d5ab922719df142fbedfdc63f20e1b0b5114cfe57. 
Jul 2 00:25:31.187227 containerd[1736]: time="2024-07-02T00:25:31.187158641Z" level=info msg="StartContainer for \"1c944903b2b23ddd0e42ee5d5ab922719df142fbedfdc63f20e1b0b5114cfe57\" returns successfully" Jul 2 00:25:31.893358 kubelet[3211]: I0702 00:25:31.893281 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8rbpz" podStartSLOduration=37.893259456 podStartE2EDuration="37.893259456s" podCreationTimestamp="2024-07-02 00:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:25:31.891423141 +0000 UTC m=+53.309435591" watchObservedRunningTime="2024-07-02 00:25:31.893259456 +0000 UTC m=+53.311271906" Jul 2 00:25:32.548682 systemd-networkd[1552]: vxlan.calico: Gained IPv6LL Jul 2 00:25:32.932713 systemd-networkd[1552]: calie37433f59eb: Gained IPv6LL Jul 2 00:25:35.786176 containerd[1736]: time="2024-07-02T00:25:35.786110539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:35.789164 containerd[1736]: time="2024-07-02T00:25:35.789117170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jul 2 00:25:35.792144 containerd[1736]: time="2024-07-02T00:25:35.792094401Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:35.796317 containerd[1736]: time="2024-07-02T00:25:35.796204389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:35.797050 containerd[1736]: time="2024-07-02T00:25:35.796903107Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 6.071035866s" Jul 2 00:25:35.797050 containerd[1736]: time="2024-07-02T00:25:35.796943427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jul 2 00:25:35.801980 containerd[1736]: time="2024-07-02T00:25:35.801745452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:25:35.825132 containerd[1736]: time="2024-07-02T00:25:35.823457627Z" level=info msg="CreateContainer within sandbox \"3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 00:25:35.886590 containerd[1736]: time="2024-07-02T00:25:35.886528037Z" level=info msg="CreateContainer within sandbox \"3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0847ab5f040a0a4ae1d005887a187ffcb559ab07c1adfd9fbf1517a45b3b7e1e\"" Jul 2 00:25:35.894382 containerd[1736]: time="2024-07-02T00:25:35.894329054Z" level=info msg="StartContainer for \"0847ab5f040a0a4ae1d005887a187ffcb559ab07c1adfd9fbf1517a45b3b7e1e\"" Jul 2 00:25:35.927989 systemd[1]: Started cri-containerd-0847ab5f040a0a4ae1d005887a187ffcb559ab07c1adfd9fbf1517a45b3b7e1e.scope - libcontainer container 0847ab5f040a0a4ae1d005887a187ffcb559ab07c1adfd9fbf1517a45b3b7e1e. 
Jul 2 00:25:35.974256 containerd[1736]: time="2024-07-02T00:25:35.974167174Z" level=info msg="StartContainer for \"0847ab5f040a0a4ae1d005887a187ffcb559ab07c1adfd9fbf1517a45b3b7e1e\" returns successfully" Jul 2 00:25:36.916613 kubelet[3211]: I0702 00:25:36.916225 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-855df8f485-kwb2l" podStartSLOduration=28.578158764 podStartE2EDuration="35.916205221s" podCreationTimestamp="2024-07-02 00:25:01 +0000 UTC" firstStartedPulling="2024-07-02 00:25:28.460314766 +0000 UTC m=+49.878327216" lastFinishedPulling="2024-07-02 00:25:35.798361223 +0000 UTC m=+57.216373673" observedRunningTime="2024-07-02 00:25:36.913926828 +0000 UTC m=+58.331939278" watchObservedRunningTime="2024-07-02 00:25:36.916205221 +0000 UTC m=+58.334217631" Jul 2 00:25:38.162452 containerd[1736]: time="2024-07-02T00:25:38.162389753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:38.165561 containerd[1736]: time="2024-07-02T00:25:38.165512183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jul 2 00:25:38.169545 containerd[1736]: time="2024-07-02T00:25:38.169478732Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:38.175407 containerd[1736]: time="2024-07-02T00:25:38.175343514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:38.176214 containerd[1736]: time="2024-07-02T00:25:38.176084432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with 
image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 2.37429906s" Jul 2 00:25:38.176214 containerd[1736]: time="2024-07-02T00:25:38.176121232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jul 2 00:25:38.179517 containerd[1736]: time="2024-07-02T00:25:38.179479421Z" level=info msg="CreateContainer within sandbox \"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:25:38.219967 containerd[1736]: time="2024-07-02T00:25:38.219915420Z" level=info msg="CreateContainer within sandbox \"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"10d6029b75e8e1bea66f3bc6cfc1d71b5058ff52b9cd80232843a436d80f1ef0\"" Jul 2 00:25:38.220930 containerd[1736]: time="2024-07-02T00:25:38.220812817Z" level=info msg="StartContainer for \"10d6029b75e8e1bea66f3bc6cfc1d71b5058ff52b9cd80232843a436d80f1ef0\"" Jul 2 00:25:38.251787 systemd[1]: Started cri-containerd-10d6029b75e8e1bea66f3bc6cfc1d71b5058ff52b9cd80232843a436d80f1ef0.scope - libcontainer container 10d6029b75e8e1bea66f3bc6cfc1d71b5058ff52b9cd80232843a436d80f1ef0. 
Jul 2 00:25:38.280599 containerd[1736]: time="2024-07-02T00:25:38.280393118Z" level=info msg="StartContainer for \"10d6029b75e8e1bea66f3bc6cfc1d71b5058ff52b9cd80232843a436d80f1ef0\" returns successfully" Jul 2 00:25:38.703621 containerd[1736]: time="2024-07-02T00:25:38.703485606Z" level=info msg="StopPodSandbox for \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\"" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.739 [WARNING][5282] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49c41bcc-2760-45cb-87bc-55a1cf3e0250", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af", Pod:"csi-node-driver-pw44m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie6b74e8f691", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.741 [INFO][5282] k8s.go 608: Cleaning up netns ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.741 [INFO][5282] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" iface="eth0" netns="" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.741 [INFO][5282] k8s.go 615: Releasing IP address(es) ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.741 [INFO][5282] utils.go 188: Calico CNI releasing IP address ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.769 [INFO][5288] ipam_plugin.go 411: Releasing address using handleID ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.770 [INFO][5288] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.770 [INFO][5288] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.779 [WARNING][5288] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.779 [INFO][5288] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.782 [INFO][5288] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:38.784669 containerd[1736]: 2024-07-02 00:25:38.783 [INFO][5282] k8s.go 621: Teardown processing complete. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.785894 containerd[1736]: time="2024-07-02T00:25:38.784709561Z" level=info msg="TearDown network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\" successfully" Jul 2 00:25:38.785894 containerd[1736]: time="2024-07-02T00:25:38.784736681Z" level=info msg="StopPodSandbox for \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\" returns successfully" Jul 2 00:25:38.785894 containerd[1736]: time="2024-07-02T00:25:38.785214560Z" level=info msg="RemovePodSandbox for \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\"" Jul 2 00:25:38.785894 containerd[1736]: time="2024-07-02T00:25:38.785245440Z" level=info msg="Forcibly stopping sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\"" Jul 2 00:25:38.795341 kubelet[3211]: I0702 00:25:38.795119 3211 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:25:38.795341 
kubelet[3211]: I0702 00:25:38.795149 3211 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.833 [WARNING][5306] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"49c41bcc-2760-45cb-87bc-55a1cf3e0250", ResourceVersion:"729", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"feca0c8d2efc2b67861fd2d388095a05d59c939e33f020129a80b5d406b693af", Pod:"csi-node-driver-pw44m", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.89.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calie6b74e8f691", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:38.891290 containerd[1736]: 
2024-07-02 00:25:38.834 [INFO][5306] k8s.go 608: Cleaning up netns ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.834 [INFO][5306] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" iface="eth0" netns="" Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.834 [INFO][5306] k8s.go 615: Releasing IP address(es) ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.834 [INFO][5306] utils.go 188: Calico CNI releasing IP address ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.855 [INFO][5312] ipam_plugin.go 411: Releasing address using handleID ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.855 [INFO][5312] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.855 [INFO][5312] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.880 [WARNING][5312] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.880 [INFO][5312] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" HandleID="k8s-pod-network.3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-csi--node--driver--pw44m-eth0" Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.884 [INFO][5312] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:38.891290 containerd[1736]: 2024-07-02 00:25:38.888 [INFO][5306] k8s.go 621: Teardown processing complete. ContainerID="3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c" Jul 2 00:25:38.891290 containerd[1736]: time="2024-07-02T00:25:38.891280561Z" level=info msg="TearDown network for sandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\" successfully" Jul 2 00:25:38.906308 containerd[1736]: time="2024-07-02T00:25:38.906259196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:38.907462 containerd[1736]: time="2024-07-02T00:25:38.906329395Z" level=info msg="RemovePodSandbox \"3301e2d15050b5133bef2cb564c7fb23fc97003aa3bb8bb16818210de3c50f5c\" returns successfully" Jul 2 00:25:38.908670 containerd[1736]: time="2024-07-02T00:25:38.907813071Z" level=info msg="StopPodSandbox for \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\"" Jul 2 00:25:38.925951 kubelet[3211]: I0702 00:25:38.925267 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pw44m" podStartSLOduration=28.103642514 podStartE2EDuration="37.925248539s" podCreationTimestamp="2024-07-02 00:25:01 +0000 UTC" firstStartedPulling="2024-07-02 00:25:28.355455484 +0000 UTC m=+49.773467894" lastFinishedPulling="2024-07-02 00:25:38.177061469 +0000 UTC m=+59.595073919" observedRunningTime="2024-07-02 00:25:38.9247701 +0000 UTC m=+60.342782550" watchObservedRunningTime="2024-07-02 00:25:38.925248539 +0000 UTC m=+60.343260949" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.971 [WARNING][5333] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dba33db4-da4e-40f4-abe2-61c91ad79197", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd", Pod:"coredns-7db6d8ff4d-xkfm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali138d684fd15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.971 [INFO][5333] k8s.go 608: 
Cleaning up netns ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.971 [INFO][5333] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" iface="eth0" netns="" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.971 [INFO][5333] k8s.go 615: Releasing IP address(es) ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.972 [INFO][5333] utils.go 188: Calico CNI releasing IP address ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.993 [INFO][5339] ipam_plugin.go 411: Releasing address using handleID ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.993 [INFO][5339] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:38.993 [INFO][5339] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:39.014 [WARNING][5339] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:39.015 [INFO][5339] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:39.017 [INFO][5339] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:39.019896 containerd[1736]: 2024-07-02 00:25:39.018 [INFO][5333] k8s.go 621: Teardown processing complete. ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.021779 containerd[1736]: time="2024-07-02T00:25:39.021608209Z" level=info msg="TearDown network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\" successfully" Jul 2 00:25:39.021779 containerd[1736]: time="2024-07-02T00:25:39.021643249Z" level=info msg="StopPodSandbox for \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\" returns successfully" Jul 2 00:25:39.022767 containerd[1736]: time="2024-07-02T00:25:39.022288007Z" level=info msg="RemovePodSandbox for \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\"" Jul 2 00:25:39.022767 containerd[1736]: time="2024-07-02T00:25:39.022321047Z" level=info msg="Forcibly stopping sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\"" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.068 [WARNING][5357] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dba33db4-da4e-40f4-abe2-61c91ad79197", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"956b8d711f309ea8545f7a598660310dbc63c3c9a03ec326b70c041430d781bd", Pod:"coredns-7db6d8ff4d-xkfm7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali138d684fd15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.068 [INFO][5357] k8s.go 608: 
Cleaning up netns ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.068 [INFO][5357] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" iface="eth0" netns="" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.068 [INFO][5357] k8s.go 615: Releasing IP address(es) ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.068 [INFO][5357] utils.go 188: Calico CNI releasing IP address ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.090 [INFO][5363] ipam_plugin.go 411: Releasing address using handleID ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.091 [INFO][5363] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.091 [INFO][5363] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.102 [WARNING][5363] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.102 [INFO][5363] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" HandleID="k8s-pod-network.e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--xkfm7-eth0" Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.104 [INFO][5363] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:39.107276 containerd[1736]: 2024-07-02 00:25:39.105 [INFO][5357] k8s.go 621: Teardown processing complete. ContainerID="e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03" Jul 2 00:25:39.108846 containerd[1736]: time="2024-07-02T00:25:39.107696550Z" level=info msg="TearDown network for sandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\" successfully" Jul 2 00:25:39.116957 containerd[1736]: time="2024-07-02T00:25:39.116907282Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:39.117099 containerd[1736]: time="2024-07-02T00:25:39.116990762Z" level=info msg="RemovePodSandbox \"e661bccd2c88a3b8fbe095cec063f864859dba3c42f74fe8f3c73fb87557ce03\" returns successfully" Jul 2 00:25:39.117766 containerd[1736]: time="2024-07-02T00:25:39.117471280Z" level=info msg="StopPodSandbox for \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\"" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.161 [WARNING][5382] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0", GenerateName:"calico-kube-controllers-855df8f485-", Namespace:"calico-system", SelfLink:"", UID:"83ee114b-f9bf-4eb9-8231-c5444d239bae", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855df8f485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea", Pod:"calico-kube-controllers-855df8f485-kwb2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a4608a19b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.162 [INFO][5382] k8s.go 608: Cleaning up netns ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.162 [INFO][5382] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" iface="eth0" netns="" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.162 [INFO][5382] k8s.go 615: Releasing IP address(es) ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.162 [INFO][5382] utils.go 188: Calico CNI releasing IP address ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.184 [INFO][5388] ipam_plugin.go 411: Releasing address using handleID ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.185 [INFO][5388] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.185 [INFO][5388] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.195 [WARNING][5388] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.195 [INFO][5388] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.197 [INFO][5388] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:39.200062 containerd[1736]: 2024-07-02 00:25:39.198 [INFO][5382] k8s.go 621: Teardown processing complete. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.201380 containerd[1736]: time="2024-07-02T00:25:39.200639870Z" level=info msg="TearDown network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\" successfully" Jul 2 00:25:39.201380 containerd[1736]: time="2024-07-02T00:25:39.200672350Z" level=info msg="StopPodSandbox for \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\" returns successfully" Jul 2 00:25:39.201727 containerd[1736]: time="2024-07-02T00:25:39.201687867Z" level=info msg="RemovePodSandbox for \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\"" Jul 2 00:25:39.201802 containerd[1736]: time="2024-07-02T00:25:39.201731147Z" level=info msg="Forcibly stopping sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\"" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.252 [WARNING][5407] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0", GenerateName:"calico-kube-controllers-855df8f485-", Namespace:"calico-system", SelfLink:"", UID:"83ee114b-f9bf-4eb9-8231-c5444d239bae", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 25, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"855df8f485", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"3860904d1ec8016216800728cdd98f2a33951d84c912e9e28151bbb664c08fea", Pod:"calico-kube-controllers-855df8f485-kwb2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3a4608a19b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.252 [INFO][5407] k8s.go 608: Cleaning up netns ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.252 [INFO][5407] dataplane_linux.go 526: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" iface="eth0" netns="" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.252 [INFO][5407] k8s.go 615: Releasing IP address(es) ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.252 [INFO][5407] utils.go 188: Calico CNI releasing IP address ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.271 [INFO][5415] ipam_plugin.go 411: Releasing address using handleID ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.271 [INFO][5415] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.271 [INFO][5415] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.279 [WARNING][5415] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.279 [INFO][5415] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" HandleID="k8s-pod-network.ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--kube--controllers--855df8f485--kwb2l-eth0" Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.281 [INFO][5415] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:39.284440 containerd[1736]: 2024-07-02 00:25:39.282 [INFO][5407] k8s.go 621: Teardown processing complete. ContainerID="ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077" Jul 2 00:25:39.284440 containerd[1736]: time="2024-07-02T00:25:39.284377898Z" level=info msg="TearDown network for sandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\" successfully" Jul 2 00:25:39.292467 containerd[1736]: time="2024-07-02T00:25:39.292415714Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 00:25:39.292630 containerd[1736]: time="2024-07-02T00:25:39.292499274Z" level=info msg="RemovePodSandbox \"ec842b2c8cd67cd1c6d1adcf155103578cf14d22489c80928fc1c60110b55077\" returns successfully" Jul 2 00:25:39.293290 containerd[1736]: time="2024-07-02T00:25:39.293043992Z" level=info msg="StopPodSandbox for \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\"" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.331 [WARNING][5433] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e7e441b-83c6-45ca-9941-b2bb675b47a8", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b", Pod:"coredns-7db6d8ff4d-8rbpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie37433f59eb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.331 [INFO][5433] k8s.go 608: Cleaning up netns ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.331 [INFO][5433] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" iface="eth0" netns="" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.331 [INFO][5433] k8s.go 615: Releasing IP address(es) ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.331 [INFO][5433] utils.go 188: Calico CNI releasing IP address ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.352 [INFO][5439] ipam_plugin.go 411: Releasing address using handleID ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.352 [INFO][5439] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.352 [INFO][5439] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.361 [WARNING][5439] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.361 [INFO][5439] ipam_plugin.go 439: Releasing address using workloadID ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.362 [INFO][5439] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:39.365483 containerd[1736]: 2024-07-02 00:25:39.364 [INFO][5433] k8s.go 621: Teardown processing complete. 
ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.366733 containerd[1736]: time="2024-07-02T00:25:39.365532774Z" level=info msg="TearDown network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\" successfully" Jul 2 00:25:39.366733 containerd[1736]: time="2024-07-02T00:25:39.365577054Z" level=info msg="StopPodSandbox for \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\" returns successfully" Jul 2 00:25:39.366733 containerd[1736]: time="2024-07-02T00:25:39.366012853Z" level=info msg="RemovePodSandbox for \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\"" Jul 2 00:25:39.366733 containerd[1736]: time="2024-07-02T00:25:39.366046413Z" level=info msg="Forcibly stopping sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\"" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.402 [WARNING][5457] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4e7e441b-83c6-45ca-9941-b2bb675b47a8", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"2095e3a9677c39adea1041a41e5f7c0d40b444b79f9bcba74bdddafadf36a27b", Pod:"coredns-7db6d8ff4d-8rbpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie37433f59eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.402 [INFO][5457] k8s.go 608: 
Cleaning up netns ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.402 [INFO][5457] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" iface="eth0" netns="" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.403 [INFO][5457] k8s.go 615: Releasing IP address(es) ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.403 [INFO][5457] utils.go 188: Calico CNI releasing IP address ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.420 [INFO][5463] ipam_plugin.go 411: Releasing address using handleID ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.420 [INFO][5463] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.420 [INFO][5463] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.428 [WARNING][5463] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.428 [INFO][5463] ipam_plugin.go 439: Releasing address using workloadID ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" HandleID="k8s-pod-network.155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-coredns--7db6d8ff4d--8rbpz-eth0" Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.429 [INFO][5463] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:25:39.432711 containerd[1736]: 2024-07-02 00:25:39.431 [INFO][5457] k8s.go 621: Teardown processing complete. ContainerID="155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22" Jul 2 00:25:39.433141 containerd[1736]: time="2024-07-02T00:25:39.432751812Z" level=info msg="TearDown network for sandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\" successfully" Jul 2 00:25:39.441673 containerd[1736]: time="2024-07-02T00:25:39.441607466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:25:39.442007 containerd[1736]: time="2024-07-02T00:25:39.441683505Z" level=info msg="RemovePodSandbox \"155daa225422b3f8d5ddb4811540c7a9f6a6b585f0a96926afd58bc73f672e22\" returns successfully" Jul 2 00:25:53.600494 systemd[1]: run-containerd-runc-k8s.io-0847ab5f040a0a4ae1d005887a187ffcb559ab07c1adfd9fbf1517a45b3b7e1e-runc.QxoQI7.mount: Deactivated successfully. 
Jul 2 00:25:55.278990 update_engine[1684]: I0702 00:25:55.278941 1684 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 00:25:55.278990 update_engine[1684]: I0702 00:25:55.278985 1684 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 00:25:55.279346 update_engine[1684]: I0702 00:25:55.279224 1684 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 00:25:55.279944 update_engine[1684]: I0702 00:25:55.279918 1684 omaha_request_params.cc:62] Current group set to beta Jul 2 00:25:55.280060 update_engine[1684]: I0702 00:25:55.280028 1684 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 00:25:55.280060 update_engine[1684]: I0702 00:25:55.280045 1684 update_attempter.cc:643] Scheduling an action processor start. Jul 2 00:25:55.280060 update_engine[1684]: I0702 00:25:55.280061 1684 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:25:55.280131 update_engine[1684]: I0702 00:25:55.280090 1684 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 00:25:55.280153 update_engine[1684]: I0702 00:25:55.280143 1684 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 00:25:55.280153 update_engine[1684]: I0702 00:25:55.280148 1684 omaha_request_action.cc:272] Request: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280153 update_engine[1684]: Jul 2 00:25:55.280344 update_engine[1684]: I0702 00:25:55.280153 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:25:55.280851 locksmithd[1775]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 
NewSize=0 Jul 2 00:25:55.282051 update_engine[1684]: I0702 00:25:55.282022 1684 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:25:55.282354 update_engine[1684]: I0702 00:25:55.282332 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:25:55.402397 update_engine[1684]: E0702 00:25:55.402344 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:25:55.402532 update_engine[1684]: I0702 00:25:55.402442 1684 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 00:26:05.074935 kubelet[3211]: I0702 00:26:05.074879 3211 topology_manager.go:215] "Topology Admit Handler" podUID="e813544a-13ee-459d-a864-ef48b5a60377" podNamespace="calico-apiserver" podName="calico-apiserver-5b6d49866d-qkvtq" Jul 2 00:26:05.083619 systemd[1]: Created slice kubepods-besteffort-pode813544a_13ee_459d_a864_ef48b5a60377.slice - libcontainer container kubepods-besteffort-pode813544a_13ee_459d_a864_ef48b5a60377.slice. Jul 2 00:26:05.110880 kubelet[3211]: I0702 00:26:05.110168 3211 topology_manager.go:215] "Topology Admit Handler" podUID="924e2230-aa7c-4cd5-b865-cce6023cdb6e" podNamespace="calico-apiserver" podName="calico-apiserver-5b6d49866d-nhzmg" Jul 2 00:26:05.121364 systemd[1]: Created slice kubepods-besteffort-pod924e2230_aa7c_4cd5_b865_cce6023cdb6e.slice - libcontainer container kubepods-besteffort-pod924e2230_aa7c_4cd5_b865_cce6023cdb6e.slice. 
Jul 2 00:26:05.236890 kubelet[3211]: I0702 00:26:05.236834 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggf72\" (UniqueName: \"kubernetes.io/projected/e813544a-13ee-459d-a864-ef48b5a60377-kube-api-access-ggf72\") pod \"calico-apiserver-5b6d49866d-qkvtq\" (UID: \"e813544a-13ee-459d-a864-ef48b5a60377\") " pod="calico-apiserver/calico-apiserver-5b6d49866d-qkvtq" Jul 2 00:26:05.236890 kubelet[3211]: I0702 00:26:05.236885 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch9s5\" (UniqueName: \"kubernetes.io/projected/924e2230-aa7c-4cd5-b865-cce6023cdb6e-kube-api-access-ch9s5\") pod \"calico-apiserver-5b6d49866d-nhzmg\" (UID: \"924e2230-aa7c-4cd5-b865-cce6023cdb6e\") " pod="calico-apiserver/calico-apiserver-5b6d49866d-nhzmg" Jul 2 00:26:05.237200 kubelet[3211]: I0702 00:26:05.236914 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/924e2230-aa7c-4cd5-b865-cce6023cdb6e-calico-apiserver-certs\") pod \"calico-apiserver-5b6d49866d-nhzmg\" (UID: \"924e2230-aa7c-4cd5-b865-cce6023cdb6e\") " pod="calico-apiserver/calico-apiserver-5b6d49866d-nhzmg" Jul 2 00:26:05.237200 kubelet[3211]: I0702 00:26:05.236944 3211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e813544a-13ee-459d-a864-ef48b5a60377-calico-apiserver-certs\") pod \"calico-apiserver-5b6d49866d-qkvtq\" (UID: \"e813544a-13ee-459d-a864-ef48b5a60377\") " pod="calico-apiserver/calico-apiserver-5b6d49866d-qkvtq" Jul 2 00:26:05.272043 update_engine[1684]: I0702 00:26:05.271620 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:05.272043 update_engine[1684]: I0702 00:26:05.271797 1684 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 
2 00:26:05.272043 update_engine[1684]: I0702 00:26:05.272006 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:26:05.338238 kubelet[3211]: E0702 00:26:05.337664 3211 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:26:05.338238 kubelet[3211]: E0702 00:26:05.337742 3211 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e813544a-13ee-459d-a864-ef48b5a60377-calico-apiserver-certs podName:e813544a-13ee-459d-a864-ef48b5a60377 nodeName:}" failed. No retries permitted until 2024-07-02 00:26:05.837724704 +0000 UTC m=+87.255737154 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e813544a-13ee-459d-a864-ef48b5a60377-calico-apiserver-certs") pod "calico-apiserver-5b6d49866d-qkvtq" (UID: "e813544a-13ee-459d-a864-ef48b5a60377") : secret "calico-apiserver-certs" not found Jul 2 00:26:05.339406 kubelet[3211]: E0702 00:26:05.339274 3211 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:26:05.339406 kubelet[3211]: E0702 00:26:05.339365 3211 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/924e2230-aa7c-4cd5-b865-cce6023cdb6e-calico-apiserver-certs podName:924e2230-aa7c-4cd5-b865-cce6023cdb6e nodeName:}" failed. No retries permitted until 2024-07-02 00:26:05.839348539 +0000 UTC m=+87.257360989 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/924e2230-aa7c-4cd5-b865-cce6023cdb6e-calico-apiserver-certs") pod "calico-apiserver-5b6d49866d-nhzmg" (UID: "924e2230-aa7c-4cd5-b865-cce6023cdb6e") : secret "calico-apiserver-certs" not found Jul 2 00:26:05.375197 update_engine[1684]: E0702 00:26:05.375101 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:05.375197 update_engine[1684]: I0702 00:26:05.375167 1684 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 00:26:05.840646 kubelet[3211]: E0702 00:26:05.840598 3211 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:26:05.840809 kubelet[3211]: E0702 00:26:05.840678 3211 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/924e2230-aa7c-4cd5-b865-cce6023cdb6e-calico-apiserver-certs podName:924e2230-aa7c-4cd5-b865-cce6023cdb6e nodeName:}" failed. No retries permitted until 2024-07-02 00:26:06.840657663 +0000 UTC m=+88.258670113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/924e2230-aa7c-4cd5-b865-cce6023cdb6e-calico-apiserver-certs") pod "calico-apiserver-5b6d49866d-nhzmg" (UID: "924e2230-aa7c-4cd5-b865-cce6023cdb6e") : secret "calico-apiserver-certs" not found Jul 2 00:26:05.842584 kubelet[3211]: E0702 00:26:05.841081 3211 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:26:05.842584 kubelet[3211]: E0702 00:26:05.841129 3211 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e813544a-13ee-459d-a864-ef48b5a60377-calico-apiserver-certs podName:e813544a-13ee-459d-a864-ef48b5a60377 nodeName:}" failed. 
No retries permitted until 2024-07-02 00:26:06.841118662 +0000 UTC m=+88.259131112 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e813544a-13ee-459d-a864-ef48b5a60377-calico-apiserver-certs") pod "calico-apiserver-5b6d49866d-qkvtq" (UID: "e813544a-13ee-459d-a864-ef48b5a60377") : secret "calico-apiserver-certs" not found Jul 2 00:26:06.889456 containerd[1736]: time="2024-07-02T00:26:06.889352539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6d49866d-qkvtq,Uid:e813544a-13ee-459d-a864-ef48b5a60377,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:26:06.934133 containerd[1736]: time="2024-07-02T00:26:06.933920372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6d49866d-nhzmg,Uid:924e2230-aa7c-4cd5-b865-cce6023cdb6e,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:26:07.073370 systemd-networkd[1552]: cali52b449624f1: Link UP Jul 2 00:26:07.074267 systemd-networkd[1552]: cali52b449624f1: Gained carrier Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:06.960 [INFO][5573] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0 calico-apiserver-5b6d49866d- calico-apiserver e813544a-13ee-459d-a864-ef48b5a60377 902 0 2024-07-02 00:26:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b6d49866d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-3e8d94ffa6 calico-apiserver-5b6d49866d-qkvtq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali52b449624f1 [] []}} ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" 
Pod="calico-apiserver-5b6d49866d-qkvtq" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:06.960 [INFO][5573] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-qkvtq" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.003 [INFO][5585] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" HandleID="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.024 [INFO][5585] ipam_plugin.go 264: Auto assigning IP ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" HandleID="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ede00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-3e8d94ffa6", "pod":"calico-apiserver-5b6d49866d-qkvtq", "timestamp":"2024-07-02 00:26:07.003029654 +0000 UTC"}, Hostname:"ci-3975.1.1-a-3e8d94ffa6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.025 [INFO][5585] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.025 [INFO][5585] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.025 [INFO][5585] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-3e8d94ffa6' Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.026 [INFO][5585] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.038 [INFO][5585] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.044 [INFO][5585] ipam.go 489: Trying affinity for 192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.047 [INFO][5585] ipam.go 155: Attempting to load block cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.050 [INFO][5585] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.050 [INFO][5585] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.052 [INFO][5585] ipam.go 1685: Creating new handle: k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.058 [INFO][5585] ipam.go 1203: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 
containerd[1736]: 2024-07-02 00:26:07.063 [INFO][5585] ipam.go 1216: Successfully claimed IPs: [192.168.89.69/26] block=192.168.89.64/26 handle="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.064 [INFO][5585] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.69/26] handle="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.064 [INFO][5585] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:26:07.099281 containerd[1736]: 2024-07-02 00:26:07.064 [INFO][5585] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.89.69/26] IPv6=[] ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" HandleID="k8s-pod-network.210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" Jul 2 00:26:07.100089 containerd[1736]: 2024-07-02 00:26:07.068 [INFO][5573] k8s.go 386: Populated endpoint ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-qkvtq" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0", GenerateName:"calico-apiserver-5b6d49866d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e813544a-13ee-459d-a864-ef48b5a60377", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6d49866d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"", Pod:"calico-apiserver-5b6d49866d-qkvtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52b449624f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:07.100089 containerd[1736]: 2024-07-02 00:26:07.068 [INFO][5573] k8s.go 387: Calico CNI using IPs: [192.168.89.69/32] ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-qkvtq" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" Jul 2 00:26:07.100089 containerd[1736]: 2024-07-02 00:26:07.068 [INFO][5573] dataplane_linux.go 68: Setting the host side veth name to cali52b449624f1 ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-qkvtq" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" Jul 2 00:26:07.100089 containerd[1736]: 2024-07-02 00:26:07.074 [INFO][5573] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-qkvtq" 
WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" Jul 2 00:26:07.100089 containerd[1736]: 2024-07-02 00:26:07.075 [INFO][5573] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-qkvtq" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0", GenerateName:"calico-apiserver-5b6d49866d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e813544a-13ee-459d-a864-ef48b5a60377", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6d49866d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab", Pod:"calico-apiserver-5b6d49866d-qkvtq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali52b449624f1", MAC:"36:1d:81:e6:ca:54", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:07.100089 containerd[1736]: 2024-07-02 00:26:07.097 [INFO][5573] k8s.go 500: Wrote updated endpoint to datastore ContainerID="210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-qkvtq" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--qkvtq-eth0" Jul 2 00:26:07.142646 containerd[1736]: time="2024-07-02T00:26:07.140400420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:26:07.142646 containerd[1736]: time="2024-07-02T00:26:07.141204618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:07.142646 containerd[1736]: time="2024-07-02T00:26:07.141226498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:26:07.142646 containerd[1736]: time="2024-07-02T00:26:07.141237218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:07.170830 systemd[1]: Started cri-containerd-210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab.scope - libcontainer container 210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab. 
Jul 2 00:26:07.172023 systemd-networkd[1552]: calicb0a0754393: Link UP Jul 2 00:26:07.173058 systemd-networkd[1552]: calicb0a0754393: Gained carrier Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.035 [INFO][5590] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0 calico-apiserver-5b6d49866d- calico-apiserver 924e2230-aa7c-4cd5-b865-cce6023cdb6e 908 0 2024-07-02 00:26:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b6d49866d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.1.1-a-3e8d94ffa6 calico-apiserver-5b6d49866d-nhzmg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicb0a0754393 [] []}} ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.035 [INFO][5590] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.094 [INFO][5603] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" HandleID="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 
00:26:07.112 [INFO][5603] ipam_plugin.go 264: Auto assigning IP ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" HandleID="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004f02c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.1.1-a-3e8d94ffa6", "pod":"calico-apiserver-5b6d49866d-nhzmg", "timestamp":"2024-07-02 00:26:07.094061193 +0000 UTC"}, Hostname:"ci-3975.1.1-a-3e8d94ffa6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.112 [INFO][5603] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.113 [INFO][5603] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.113 [INFO][5603] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.1.1-a-3e8d94ffa6' Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.114 [INFO][5603] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.122 [INFO][5603] ipam.go 372: Looking up existing affinities for host host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.130 [INFO][5603] ipam.go 489: Trying affinity for 192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.133 [INFO][5603] ipam.go 155: Attempting to load block cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.137 [INFO][5603] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.89.64/26 host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.138 [INFO][5603] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.89.64/26 handle="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.140 [INFO][5603] ipam.go 1685: Creating new handle: k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.150 [INFO][5603] ipam.go 1203: Writing block in order to claim IPs block=192.168.89.64/26 handle="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.157 [INFO][5603] ipam.go 1216: Successfully claimed IPs: [192.168.89.70/26] block=192.168.89.64/26 
handle="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.158 [INFO][5603] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.89.70/26] handle="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" host="ci-3975.1.1-a-3e8d94ffa6" Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.158 [INFO][5603] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:26:07.199144 containerd[1736]: 2024-07-02 00:26:07.158 [INFO][5603] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.89.70/26] IPv6=[] ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" HandleID="k8s-pod-network.2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Workload="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" Jul 2 00:26:07.200707 containerd[1736]: 2024-07-02 00:26:07.160 [INFO][5590] k8s.go 386: Populated endpoint ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0", GenerateName:"calico-apiserver-5b6d49866d-", Namespace:"calico-apiserver", SelfLink:"", UID:"924e2230-aa7c-4cd5-b865-cce6023cdb6e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6d49866d", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"", Pod:"calico-apiserver-5b6d49866d-nhzmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb0a0754393", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:07.200707 containerd[1736]: 2024-07-02 00:26:07.160 [INFO][5590] k8s.go 387: Calico CNI using IPs: [192.168.89.70/32] ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" Jul 2 00:26:07.200707 containerd[1736]: 2024-07-02 00:26:07.160 [INFO][5590] dataplane_linux.go 68: Setting the host side veth name to calicb0a0754393 ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" Jul 2 00:26:07.200707 containerd[1736]: 2024-07-02 00:26:07.173 [INFO][5590] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" Jul 2 00:26:07.200707 containerd[1736]: 2024-07-02 
00:26:07.176 [INFO][5590] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0", GenerateName:"calico-apiserver-5b6d49866d-", Namespace:"calico-apiserver", SelfLink:"", UID:"924e2230-aa7c-4cd5-b865-cce6023cdb6e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 26, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b6d49866d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.1.1-a-3e8d94ffa6", ContainerID:"2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d", Pod:"calico-apiserver-5b6d49866d-nhzmg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb0a0754393", MAC:"d6:ca:2f:33:aa:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:26:07.200707 containerd[1736]: 2024-07-02 00:26:07.195 [INFO][5590] k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d" Namespace="calico-apiserver" Pod="calico-apiserver-5b6d49866d-nhzmg" WorkloadEndpoint="ci--3975.1.1--a--3e8d94ffa6-k8s-calico--apiserver--5b6d49866d--nhzmg-eth0" Jul 2 00:26:07.240045 containerd[1736]: time="2024-07-02T00:26:07.239939895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:26:07.240371 containerd[1736]: time="2024-07-02T00:26:07.240195094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:07.240371 containerd[1736]: time="2024-07-02T00:26:07.240230414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:26:07.240486 containerd[1736]: time="2024-07-02T00:26:07.240255134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:26:07.253911 containerd[1736]: time="2024-07-02T00:26:07.253748576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6d49866d-qkvtq,Uid:e813544a-13ee-459d-a864-ef48b5a60377,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab\"" Jul 2 00:26:07.258011 containerd[1736]: time="2024-07-02T00:26:07.256879447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:26:07.261953 systemd[1]: Started cri-containerd-2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d.scope - libcontainer container 2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d. 
Jul 2 00:26:07.303913 containerd[1736]: time="2024-07-02T00:26:07.303854352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b6d49866d-nhzmg,Uid:924e2230-aa7c-4cd5-b865-cce6023cdb6e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d\"" Jul 2 00:26:08.196762 systemd-networkd[1552]: cali52b449624f1: Gained IPv6LL Jul 2 00:26:08.324804 systemd-networkd[1552]: calicb0a0754393: Gained IPv6LL Jul 2 00:26:09.679917 containerd[1736]: time="2024-07-02T00:26:09.679850426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:09.681911 containerd[1736]: time="2024-07-02T00:26:09.681794780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jul 2 00:26:09.685321 containerd[1736]: time="2024-07-02T00:26:09.685252091Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:09.689834 containerd[1736]: time="2024-07-02T00:26:09.689765318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:09.691004 containerd[1736]: time="2024-07-02T00:26:09.690484636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.43346231s" Jul 2 00:26:09.691004 containerd[1736]: time="2024-07-02T00:26:09.690823555Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 00:26:09.694921 containerd[1736]: time="2024-07-02T00:26:09.694655944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:26:09.696594 containerd[1736]: time="2024-07-02T00:26:09.696532258Z" level=info msg="CreateContainer within sandbox \"210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:26:09.727953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1430759144.mount: Deactivated successfully. Jul 2 00:26:09.734190 containerd[1736]: time="2024-07-02T00:26:09.734125191Z" level=info msg="CreateContainer within sandbox \"210e4b3cac8ef920f88a6c439e687f87166eabb5acf9afdd570c916f299892ab\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a784c3f2345af68c6e2d2defce0d6d0da4ed1d81c9d7d52cfe899344e16440f6\"" Jul 2 00:26:09.736077 containerd[1736]: time="2024-07-02T00:26:09.735640746Z" level=info msg="StartContainer for \"a784c3f2345af68c6e2d2defce0d6d0da4ed1d81c9d7d52cfe899344e16440f6\"" Jul 2 00:26:09.771903 systemd[1]: Started cri-containerd-a784c3f2345af68c6e2d2defce0d6d0da4ed1d81c9d7d52cfe899344e16440f6.scope - libcontainer container a784c3f2345af68c6e2d2defce0d6d0da4ed1d81c9d7d52cfe899344e16440f6. 
Jul 2 00:26:09.807632 containerd[1736]: time="2024-07-02T00:26:09.807536700Z" level=info msg="StartContainer for \"a784c3f2345af68c6e2d2defce0d6d0da4ed1d81c9d7d52cfe899344e16440f6\" returns successfully" Jul 2 00:26:10.005814 kubelet[3211]: I0702 00:26:10.001990 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b6d49866d-qkvtq" podStartSLOduration=2.565296723 podStartE2EDuration="5.001966303s" podCreationTimestamp="2024-07-02 00:26:05 +0000 UTC" firstStartedPulling="2024-07-02 00:26:07.256338488 +0000 UTC m=+88.674350898" lastFinishedPulling="2024-07-02 00:26:09.693008028 +0000 UTC m=+91.111020478" observedRunningTime="2024-07-02 00:26:10.001824264 +0000 UTC m=+91.419836714" watchObservedRunningTime="2024-07-02 00:26:10.001966303 +0000 UTC m=+91.419978753" Jul 2 00:26:10.050906 containerd[1736]: time="2024-07-02T00:26:10.050833043Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:26:10.053249 containerd[1736]: time="2024-07-02T00:26:10.053188797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Jul 2 00:26:10.055367 containerd[1736]: time="2024-07-02T00:26:10.055320631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 360.608607ms" Jul 2 00:26:10.055367 containerd[1736]: time="2024-07-02T00:26:10.055365830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jul 2 00:26:10.057994 containerd[1736]: 
time="2024-07-02T00:26:10.057643584Z" level=info msg="CreateContainer within sandbox \"2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:26:10.118607 containerd[1736]: time="2024-07-02T00:26:10.118435970Z" level=info msg="CreateContainer within sandbox \"2cf874edcfe60322a91c70b74e33e2aabc5c5bb347e6c621efb8430479cc4e1d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50dcf7c9f398546a78e385ea95db3a5fbb4761c9bcf669a80fb2f62cc7fc9022\"" Jul 2 00:26:10.120178 containerd[1736]: time="2024-07-02T00:26:10.119211208Z" level=info msg="StartContainer for \"50dcf7c9f398546a78e385ea95db3a5fbb4761c9bcf669a80fb2f62cc7fc9022\"" Jul 2 00:26:10.148812 systemd[1]: Started cri-containerd-50dcf7c9f398546a78e385ea95db3a5fbb4761c9bcf669a80fb2f62cc7fc9022.scope - libcontainer container 50dcf7c9f398546a78e385ea95db3a5fbb4761c9bcf669a80fb2f62cc7fc9022. Jul 2 00:26:10.203200 containerd[1736]: time="2024-07-02T00:26:10.202971048Z" level=info msg="StartContainer for \"50dcf7c9f398546a78e385ea95db3a5fbb4761c9bcf669a80fb2f62cc7fc9022\" returns successfully" Jul 2 00:26:12.020420 kubelet[3211]: I0702 00:26:12.020251 3211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5b6d49866d-nhzmg" podStartSLOduration=4.270233399 podStartE2EDuration="7.020227802s" podCreationTimestamp="2024-07-02 00:26:05 +0000 UTC" firstStartedPulling="2024-07-02 00:26:07.306140585 +0000 UTC m=+88.724152995" lastFinishedPulling="2024-07-02 00:26:10.056134948 +0000 UTC m=+91.474147398" observedRunningTime="2024-07-02 00:26:11.014118164 +0000 UTC m=+92.432130614" watchObservedRunningTime="2024-07-02 00:26:12.020227802 +0000 UTC m=+93.438240212" Jul 2 00:26:15.267034 update_engine[1684]: I0702 00:26:15.266592 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:15.267034 update_engine[1684]: I0702 00:26:15.266781 1684 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:15.267034 update_engine[1684]: I0702 00:26:15.266996 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:26:15.271143 update_engine[1684]: E0702 00:26:15.271062 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:15.271143 update_engine[1684]: I0702 00:26:15.271118 1684 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 00:26:25.272737 update_engine[1684]: I0702 00:26:25.272156 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:25.272737 update_engine[1684]: I0702 00:26:25.272348 1684 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:25.272737 update_engine[1684]: I0702 00:26:25.272630 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 00:26:25.307402 update_engine[1684]: E0702 00:26:25.307358 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:25.307552 update_engine[1684]: I0702 00:26:25.307427 1684 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:26:25.307552 update_engine[1684]: I0702 00:26:25.307433 1684 omaha_request_action.cc:617] Omaha request response: Jul 2 00:26:25.307552 update_engine[1684]: E0702 00:26:25.307519 1684 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 2 00:26:25.307552 update_engine[1684]: I0702 00:26:25.307533 1684 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 00:26:25.307552 update_engine[1684]: I0702 00:26:25.307538 1684 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:26:25.307552 update_engine[1684]: I0702 00:26:25.307540 1684 update_attempter.cc:306] Processing Done. 
Jul 2 00:26:25.307715 update_engine[1684]: E0702 00:26:25.307574 1684 update_attempter.cc:619] Update failed. Jul 2 00:26:25.307715 update_engine[1684]: I0702 00:26:25.307579 1684 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 00:26:25.307715 update_engine[1684]: I0702 00:26:25.307581 1684 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 00:26:25.307715 update_engine[1684]: I0702 00:26:25.307585 1684 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 2 00:26:25.307715 update_engine[1684]: I0702 00:26:25.307653 1684 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 00:26:25.307715 update_engine[1684]: I0702 00:26:25.307673 1684 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 00:26:25.307715 update_engine[1684]: I0702 00:26:25.307677 1684 omaha_request_action.cc:272] Request: Jul 2 00:26:25.307715 update_engine[1684]: Jul 2 00:26:25.307715 update_engine[1684]: Jul 2 00:26:25.307715 update_engine[1684]: Jul 2 00:26:25.307715 update_engine[1684]: Jul 2 00:26:25.307715 update_engine[1684]: Jul 2 00:26:25.307715 update_engine[1684]: Jul 2 00:26:25.307715 update_engine[1684]: I0702 00:26:25.307680 1684 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 00:26:25.307969 update_engine[1684]: I0702 00:26:25.307806 1684 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 00:26:25.308125 update_engine[1684]: I0702 00:26:25.308017 1684 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 00:26:25.308279 locksmithd[1775]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 00:26:25.346113 update_engine[1684]: E0702 00:26:25.346071 1684 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 00:26:25.346393 update_engine[1684]: I0702 00:26:25.346138 1684 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 00:26:25.346393 update_engine[1684]: I0702 00:26:25.346143 1684 omaha_request_action.cc:617] Omaha request response: Jul 2 00:26:25.346393 update_engine[1684]: I0702 00:26:25.346146 1684 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:26:25.346393 update_engine[1684]: I0702 00:26:25.346149 1684 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 00:26:25.346393 update_engine[1684]: I0702 00:26:25.346151 1684 update_attempter.cc:306] Processing Done. Jul 2 00:26:25.346393 update_engine[1684]: I0702 00:26:25.346155 1684 update_attempter.cc:310] Error event sent. Jul 2 00:26:25.346393 update_engine[1684]: I0702 00:26:25.346164 1684 update_check_scheduler.cc:74] Next update check in 41m56s Jul 2 00:26:25.346548 locksmithd[1775]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 00:26:35.698846 systemd[1]: Started sshd@7-10.200.20.12:22-10.200.16.10:54742.service - OpenSSH per-connection server daemon (10.200.16.10:54742). Jul 2 00:26:36.179121 sshd[5886]: Accepted publickey for core from 10.200.16.10 port 54742 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:26:36.181022 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:36.185157 systemd-logind[1680]: New session 10 of user core. 
Jul 2 00:26:36.192734 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 00:26:36.608225 sshd[5886]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:36.611169 systemd[1]: sshd@7-10.200.20.12:22-10.200.16.10:54742.service: Deactivated successfully. Jul 2 00:26:36.613513 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:26:36.615128 systemd-logind[1680]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:26:36.616529 systemd-logind[1680]: Removed session 10. Jul 2 00:26:41.702930 systemd[1]: Started sshd@8-10.200.20.12:22-10.200.16.10:36428.service - OpenSSH per-connection server daemon (10.200.16.10:36428). Jul 2 00:26:42.142186 sshd[5902]: Accepted publickey for core from 10.200.16.10 port 36428 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:26:42.143680 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:42.148085 systemd-logind[1680]: New session 11 of user core. Jul 2 00:26:42.153734 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 00:26:42.539161 sshd[5902]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:42.543058 systemd[1]: sshd@8-10.200.20.12:22-10.200.16.10:36428.service: Deactivated successfully. Jul 2 00:26:42.546070 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 00:26:42.547496 systemd-logind[1680]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:26:42.548759 systemd-logind[1680]: Removed session 11. Jul 2 00:26:47.631896 systemd[1]: Started sshd@9-10.200.20.12:22-10.200.16.10:36438.service - OpenSSH per-connection server daemon (10.200.16.10:36438). 
Jul 2 00:26:48.102502 sshd[5941]: Accepted publickey for core from 10.200.16.10 port 36438 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:26:48.103874 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:48.108879 systemd-logind[1680]: New session 12 of user core. Jul 2 00:26:48.117757 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 00:26:48.520370 sshd[5941]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:48.524838 systemd[1]: sshd@9-10.200.20.12:22-10.200.16.10:36438.service: Deactivated successfully. Jul 2 00:26:48.527531 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:26:48.528766 systemd-logind[1680]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:26:48.530379 systemd-logind[1680]: Removed session 12. Jul 2 00:26:53.610138 systemd[1]: Started sshd@10-10.200.20.12:22-10.200.16.10:39210.service - OpenSSH per-connection server daemon (10.200.16.10:39210). Jul 2 00:26:54.059187 sshd[5977]: Accepted publickey for core from 10.200.16.10 port 39210 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:26:54.060820 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:54.065272 systemd-logind[1680]: New session 13 of user core. Jul 2 00:26:54.075778 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 00:26:54.458716 sshd[5977]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:54.463060 systemd[1]: sshd@10-10.200.20.12:22-10.200.16.10:39210.service: Deactivated successfully. Jul 2 00:26:54.468172 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 00:26:54.469688 systemd-logind[1680]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:26:54.471477 systemd-logind[1680]: Removed session 13. 
Jul 2 00:26:54.546423 systemd[1]: Started sshd@11-10.200.20.12:22-10.200.16.10:39214.service - OpenSSH per-connection server daemon (10.200.16.10:39214). Jul 2 00:26:54.997269 sshd[5999]: Accepted publickey for core from 10.200.16.10 port 39214 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:26:54.998899 sshd[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:55.003468 systemd-logind[1680]: New session 14 of user core. Jul 2 00:26:55.011888 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 00:26:55.429817 sshd[5999]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:55.432676 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:26:55.433644 systemd[1]: sshd@11-10.200.20.12:22-10.200.16.10:39214.service: Deactivated successfully. Jul 2 00:26:55.437444 systemd-logind[1680]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:26:55.438501 systemd-logind[1680]: Removed session 14. Jul 2 00:26:55.515004 systemd[1]: Started sshd@12-10.200.20.12:22-10.200.16.10:39216.service - OpenSSH per-connection server daemon (10.200.16.10:39216). Jul 2 00:26:55.959768 sshd[6012]: Accepted publickey for core from 10.200.16.10 port 39216 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:26:55.961584 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:26:55.966517 systemd-logind[1680]: New session 15 of user core. Jul 2 00:26:55.969753 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 00:26:56.353220 sshd[6012]: pam_unix(sshd:session): session closed for user core Jul 2 00:26:56.356223 systemd[1]: sshd@12-10.200.20.12:22-10.200.16.10:39216.service: Deactivated successfully. Jul 2 00:26:56.359047 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:26:56.361528 systemd-logind[1680]: Session 15 logged out. Waiting for processes to exit. 
Jul 2 00:26:56.362503 systemd-logind[1680]: Removed session 15. Jul 2 00:27:01.443921 systemd[1]: Started sshd@13-10.200.20.12:22-10.200.16.10:40842.service - OpenSSH per-connection server daemon (10.200.16.10:40842). Jul 2 00:27:01.917608 sshd[6029]: Accepted publickey for core from 10.200.16.10 port 40842 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:01.918992 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:01.923777 systemd-logind[1680]: New session 16 of user core. Jul 2 00:27:01.929745 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 00:27:02.334705 sshd[6029]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:02.338365 systemd[1]: sshd@13-10.200.20.12:22-10.200.16.10:40842.service: Deactivated successfully. Jul 2 00:27:02.340614 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:27:02.341405 systemd-logind[1680]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:27:02.342478 systemd-logind[1680]: Removed session 16. Jul 2 00:27:07.418849 systemd[1]: Started sshd@14-10.200.20.12:22-10.200.16.10:40848.service - OpenSSH per-connection server daemon (10.200.16.10:40848). Jul 2 00:27:07.859223 sshd[6069]: Accepted publickey for core from 10.200.16.10 port 40848 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:07.860680 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:07.864804 systemd-logind[1680]: New session 17 of user core. Jul 2 00:27:07.871876 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 00:27:08.257308 sshd[6069]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:08.260663 systemd[1]: sshd@14-10.200.20.12:22-10.200.16.10:40848.service: Deactivated successfully. Jul 2 00:27:08.262906 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:27:08.264216 systemd-logind[1680]: Session 17 logged out. 
Waiting for processes to exit. Jul 2 00:27:08.265597 systemd-logind[1680]: Removed session 17. Jul 2 00:27:13.344810 systemd[1]: Started sshd@15-10.200.20.12:22-10.200.16.10:44234.service - OpenSSH per-connection server daemon (10.200.16.10:44234). Jul 2 00:27:13.821449 sshd[6094]: Accepted publickey for core from 10.200.16.10 port 44234 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:13.822908 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:13.829139 systemd-logind[1680]: New session 18 of user core. Jul 2 00:27:13.833781 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 00:27:14.238500 sshd[6094]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:14.242191 systemd[1]: sshd@15-10.200.20.12:22-10.200.16.10:44234.service: Deactivated successfully. Jul 2 00:27:14.244140 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:27:14.245616 systemd-logind[1680]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:27:14.246471 systemd-logind[1680]: Removed session 18. Jul 2 00:27:19.323864 systemd[1]: Started sshd@16-10.200.20.12:22-10.200.16.10:48954.service - OpenSSH per-connection server daemon (10.200.16.10:48954). Jul 2 00:27:19.763132 sshd[6131]: Accepted publickey for core from 10.200.16.10 port 48954 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:19.764495 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:19.768259 systemd-logind[1680]: New session 19 of user core. Jul 2 00:27:19.779735 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 00:27:20.158124 sshd[6131]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:20.161975 systemd[1]: sshd@16-10.200.20.12:22-10.200.16.10:48954.service: Deactivated successfully. Jul 2 00:27:20.164026 systemd[1]: session-19.scope: Deactivated successfully. 
Jul 2 00:27:20.164956 systemd-logind[1680]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:27:20.165840 systemd-logind[1680]: Removed session 19. Jul 2 00:27:20.250843 systemd[1]: Started sshd@17-10.200.20.12:22-10.200.16.10:48964.service - OpenSSH per-connection server daemon (10.200.16.10:48964). Jul 2 00:27:20.721024 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 48964 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:20.722392 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:20.726753 systemd-logind[1680]: New session 20 of user core. Jul 2 00:27:20.728739 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 00:27:21.226700 sshd[6143]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:21.230530 systemd[1]: sshd@17-10.200.20.12:22-10.200.16.10:48964.service: Deactivated successfully. Jul 2 00:27:21.232796 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:27:21.233787 systemd-logind[1680]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:27:21.235079 systemd-logind[1680]: Removed session 20. Jul 2 00:27:21.316886 systemd[1]: Started sshd@18-10.200.20.12:22-10.200.16.10:48980.service - OpenSSH per-connection server daemon (10.200.16.10:48980). Jul 2 00:27:21.788791 sshd[6153]: Accepted publickey for core from 10.200.16.10 port 48980 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:21.790272 sshd[6153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:21.795649 systemd-logind[1680]: New session 21 of user core. Jul 2 00:27:21.798720 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 00:27:23.680919 sshd[6153]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:23.684664 systemd-logind[1680]: Session 21 logged out. Waiting for processes to exit. 
Jul 2 00:27:23.685295 systemd[1]: sshd@18-10.200.20.12:22-10.200.16.10:48980.service: Deactivated successfully. Jul 2 00:27:23.687839 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:27:23.689233 systemd-logind[1680]: Removed session 21. Jul 2 00:27:23.765845 systemd[1]: Started sshd@19-10.200.20.12:22-10.200.16.10:48984.service - OpenSSH per-connection server daemon (10.200.16.10:48984). Jul 2 00:27:24.206017 sshd[6173]: Accepted publickey for core from 10.200.16.10 port 48984 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:24.207483 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:24.213097 systemd-logind[1680]: New session 22 of user core. Jul 2 00:27:24.218741 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 00:27:24.714473 sshd[6173]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:24.719088 systemd[1]: sshd@19-10.200.20.12:22-10.200.16.10:48984.service: Deactivated successfully. Jul 2 00:27:24.721786 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:27:24.723266 systemd-logind[1680]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:27:24.724779 systemd-logind[1680]: Removed session 22. Jul 2 00:27:24.804915 systemd[1]: Started sshd@20-10.200.20.12:22-10.200.16.10:48996.service - OpenSSH per-connection server daemon (10.200.16.10:48996). Jul 2 00:27:25.275660 sshd[6186]: Accepted publickey for core from 10.200.16.10 port 48996 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:25.277020 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:25.283115 systemd-logind[1680]: New session 23 of user core. Jul 2 00:27:25.288791 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 00:27:25.692755 sshd[6186]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:25.695808 systemd-logind[1680]: Session 23 logged out. 
Waiting for processes to exit. Jul 2 00:27:25.695980 systemd[1]: sshd@20-10.200.20.12:22-10.200.16.10:48996.service: Deactivated successfully. Jul 2 00:27:25.698739 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:27:25.700954 systemd-logind[1680]: Removed session 23. Jul 2 00:27:30.776836 systemd[1]: Started sshd@21-10.200.20.12:22-10.200.16.10:56068.service - OpenSSH per-connection server daemon (10.200.16.10:56068). Jul 2 00:27:31.215081 sshd[6206]: Accepted publickey for core from 10.200.16.10 port 56068 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:31.216459 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:31.220517 systemd-logind[1680]: New session 24 of user core. Jul 2 00:27:31.228777 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 00:27:31.610179 sshd[6206]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:31.613986 systemd[1]: sshd@21-10.200.20.12:22-10.200.16.10:56068.service: Deactivated successfully. Jul 2 00:27:31.615775 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 00:27:31.616417 systemd-logind[1680]: Session 24 logged out. Waiting for processes to exit. Jul 2 00:27:31.617727 systemd-logind[1680]: Removed session 24. Jul 2 00:27:36.695944 systemd[1]: Started sshd@22-10.200.20.12:22-10.200.16.10:56070.service - OpenSSH per-connection server daemon (10.200.16.10:56070). Jul 2 00:27:37.135095 sshd[6248]: Accepted publickey for core from 10.200.16.10 port 56070 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:37.136519 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:37.141162 systemd-logind[1680]: New session 25 of user core. Jul 2 00:27:37.145837 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 2 00:27:37.530207 sshd[6248]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:37.534139 systemd[1]: sshd@22-10.200.20.12:22-10.200.16.10:56070.service: Deactivated successfully. Jul 2 00:27:37.536596 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:27:37.539528 systemd-logind[1680]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:27:37.540722 systemd-logind[1680]: Removed session 25. Jul 2 00:27:42.616756 systemd[1]: Started sshd@23-10.200.20.12:22-10.200.16.10:38358.service - OpenSSH per-connection server daemon (10.200.16.10:38358). Jul 2 00:27:43.099275 sshd[6264]: Accepted publickey for core from 10.200.16.10 port 38358 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:43.100666 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:43.104757 systemd-logind[1680]: New session 26 of user core. Jul 2 00:27:43.109712 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 2 00:27:43.517471 sshd[6264]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:43.520765 systemd-logind[1680]: Session 26 logged out. Waiting for processes to exit. Jul 2 00:27:43.521333 systemd[1]: sshd@23-10.200.20.12:22-10.200.16.10:38358.service: Deactivated successfully. Jul 2 00:27:43.524145 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 00:27:43.525426 systemd-logind[1680]: Removed session 26. Jul 2 00:27:48.616880 systemd[1]: Started sshd@24-10.200.20.12:22-10.200.16.10:36612.service - OpenSSH per-connection server daemon (10.200.16.10:36612). Jul 2 00:27:49.088813 sshd[6301]: Accepted publickey for core from 10.200.16.10 port 36612 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:49.090193 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:49.093989 systemd-logind[1680]: New session 27 of user core. 
Jul 2 00:27:49.098737 systemd[1]: Started session-27.scope - Session 27 of User core. Jul 2 00:27:49.503445 sshd[6301]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:49.507135 systemd[1]: sshd@24-10.200.20.12:22-10.200.16.10:36612.service: Deactivated successfully. Jul 2 00:27:49.509049 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:27:49.509980 systemd-logind[1680]: Session 27 logged out. Waiting for processes to exit. Jul 2 00:27:49.511395 systemd-logind[1680]: Removed session 27. Jul 2 00:27:54.590581 systemd[1]: Started sshd@25-10.200.20.12:22-10.200.16.10:36622.service - OpenSSH per-connection server daemon (10.200.16.10:36622). Jul 2 00:27:55.071305 sshd[6332]: Accepted publickey for core from 10.200.16.10 port 36622 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:27:55.072726 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:27:55.076794 systemd-logind[1680]: New session 28 of user core. Jul 2 00:27:55.082744 systemd[1]: Started session-28.scope - Session 28 of User core. Jul 2 00:27:55.486537 sshd[6332]: pam_unix(sshd:session): session closed for user core Jul 2 00:27:55.490021 systemd[1]: sshd@25-10.200.20.12:22-10.200.16.10:36622.service: Deactivated successfully. Jul 2 00:27:55.491936 systemd[1]: session-28.scope: Deactivated successfully. Jul 2 00:27:55.493426 systemd-logind[1680]: Session 28 logged out. Waiting for processes to exit. Jul 2 00:27:55.494340 systemd-logind[1680]: Removed session 28. Jul 2 00:28:00.577881 systemd[1]: Started sshd@26-10.200.20.12:22-10.200.16.10:35006.service - OpenSSH per-connection server daemon (10.200.16.10:35006). 
Jul 2 00:28:01.056697 sshd[6353]: Accepted publickey for core from 10.200.16.10 port 35006 ssh2: RSA SHA256:Oj9dhUKKAkNJxnTlD31/rDW0GbgMnNaiKfDl/682rGY Jul 2 00:28:01.058100 sshd[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:28:01.062302 systemd-logind[1680]: New session 29 of user core. Jul 2 00:28:01.069721 systemd[1]: Started session-29.scope - Session 29 of User core. Jul 2 00:28:01.474855 sshd[6353]: pam_unix(sshd:session): session closed for user core Jul 2 00:28:01.478420 systemd[1]: sshd@26-10.200.20.12:22-10.200.16.10:35006.service: Deactivated successfully. Jul 2 00:28:01.480442 systemd[1]: session-29.scope: Deactivated successfully. Jul 2 00:28:01.481478 systemd-logind[1680]: Session 29 logged out. Waiting for processes to exit. Jul 2 00:28:01.482580 systemd-logind[1680]: Removed session 29. Jul 2 00:28:57.713692 systemd[1]: cri-containerd-d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d.scope: Deactivated successfully. Jul 2 00:28:57.714379 systemd[1]: cri-containerd-d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d.scope: Consumed 6.792s CPU time. Jul 2 00:28:57.735576 containerd[1736]: time="2024-07-02T00:28:57.735260371Z" level=info msg="shim disconnected" id=d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d namespace=k8s.io Jul 2 00:28:57.735576 containerd[1736]: time="2024-07-02T00:28:57.735321011Z" level=warning msg="cleaning up after shim disconnected" id=d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d namespace=k8s.io Jul 2 00:28:57.735576 containerd[1736]: time="2024-07-02T00:28:57.735329851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:28:57.736531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d-rootfs.mount: Deactivated successfully. 
Jul 2 00:28:57.747989 containerd[1736]: time="2024-07-02T00:28:57.747932453Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:28:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:28:58.294310 systemd[1]: cri-containerd-fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38.scope: Deactivated successfully. Jul 2 00:28:58.296041 systemd[1]: cri-containerd-fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38.scope: Consumed 3.326s CPU time, 22.1M memory peak, 0B memory swap peak. Jul 2 00:28:58.317338 containerd[1736]: time="2024-07-02T00:28:58.317280590Z" level=info msg="shim disconnected" id=fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38 namespace=k8s.io Jul 2 00:28:58.317551 containerd[1736]: time="2024-07-02T00:28:58.317532030Z" level=warning msg="cleaning up after shim disconnected" id=fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38 namespace=k8s.io Jul 2 00:28:58.317729 containerd[1736]: time="2024-07-02T00:28:58.317663269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:28:58.318369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38-rootfs.mount: Deactivated successfully. 
Jul 2 00:28:58.329389 kubelet[3211]: I0702 00:28:58.328603 3211 scope.go:117] "RemoveContainer" containerID="d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d" Jul 2 00:28:58.332009 containerd[1736]: time="2024-07-02T00:28:58.331710387Z" level=info msg="CreateContainer within sandbox \"b6cd7a7f5bf4031dd6f3ad0dfbe41bc198f3e5edcd6ca76d0fb86cff188fdced\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 2 00:28:58.372214 containerd[1736]: time="2024-07-02T00:28:58.372102506Z" level=info msg="CreateContainer within sandbox \"b6cd7a7f5bf4031dd6f3ad0dfbe41bc198f3e5edcd6ca76d0fb86cff188fdced\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be\"" Jul 2 00:28:58.372833 containerd[1736]: time="2024-07-02T00:28:58.372736065Z" level=info msg="StartContainer for \"7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be\"" Jul 2 00:28:58.396770 systemd[1]: Started cri-containerd-7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be.scope - libcontainer container 7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be. 
Jul 2 00:28:58.424598 containerd[1736]: time="2024-07-02T00:28:58.424535910Z" level=info msg="StartContainer for \"7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be\" returns successfully" Jul 2 00:28:59.333969 kubelet[3211]: I0702 00:28:59.333936 3211 scope.go:117] "RemoveContainer" containerID="fa1f97371d62be2762bf739ee75cc301e8de33c9affbb976e6db228d9cebca38" Jul 2 00:28:59.336274 containerd[1736]: time="2024-07-02T00:28:59.336233262Z" level=info msg="CreateContainer within sandbox \"10bc95d076f644e97ad3bdfc31d90d8f80a71489b9ba69be81d7acef2bced655\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 00:28:59.372855 containerd[1736]: time="2024-07-02T00:28:59.372791233Z" level=info msg="CreateContainer within sandbox \"10bc95d076f644e97ad3bdfc31d90d8f80a71489b9ba69be81d7acef2bced655\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b047a051e71fc9db594490ec864f46165dceda21637d55d6016c3ddb607ccdab\"" Jul 2 00:28:59.373591 containerd[1736]: time="2024-07-02T00:28:59.373292592Z" level=info msg="StartContainer for \"b047a051e71fc9db594490ec864f46165dceda21637d55d6016c3ddb607ccdab\"" Jul 2 00:28:59.410737 systemd[1]: Started cri-containerd-b047a051e71fc9db594490ec864f46165dceda21637d55d6016c3ddb607ccdab.scope - libcontainer container b047a051e71fc9db594490ec864f46165dceda21637d55d6016c3ddb607ccdab. 
Jul 2 00:28:59.451852 containerd[1736]: time="2024-07-02T00:28:59.451801397Z" level=info msg="StartContainer for \"b047a051e71fc9db594490ec864f46165dceda21637d55d6016c3ddb607ccdab\" returns successfully" Jul 2 00:29:00.956542 kubelet[3211]: E0702 00:29:00.956361 3211 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout" event="&Event{ObjectMeta:{kube-apiserver-ci-3975.1.1-a-3e8d94ffa6.17de3dde4032ff2e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3975.1.1-a-3e8d94ffa6,UID:3dab7bb20b9b779535ed246568dbcd47,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3975.1.1-a-3e8d94ffa6,},FirstTimestamp:2024-07-02 00:28:52.471226158 +0000 UTC m=+253.889238608,LastTimestamp:2024-07-02 00:28:52.471226158 +0000 UTC m=+253.889238608,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3975.1.1-a-3e8d94ffa6,}" Jul 2 00:29:01.830248 kubelet[3211]: E0702 00:29:01.830204 3211 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.12:45768->10.200.20.11:2379: read: connection timed out" Jul 2 00:29:01.837272 systemd[1]: cri-containerd-3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd.scope: Deactivated successfully. Jul 2 00:29:01.837530 systemd[1]: cri-containerd-3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd.scope: Consumed 2.429s CPU time, 18.4M memory peak, 0B memory swap peak. 
Jul 2 00:29:01.862310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd-rootfs.mount: Deactivated successfully. Jul 2 00:29:01.863053 containerd[1736]: time="2024-07-02T00:29:01.862947777Z" level=info msg="shim disconnected" id=3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd namespace=k8s.io Jul 2 00:29:01.863053 containerd[1736]: time="2024-07-02T00:29:01.863009817Z" level=warning msg="cleaning up after shim disconnected" id=3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd namespace=k8s.io Jul 2 00:29:01.863053 containerd[1736]: time="2024-07-02T00:29:01.863019257Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:29:02.344735 kubelet[3211]: I0702 00:29:02.344697 3211 scope.go:117] "RemoveContainer" containerID="3f37cc25e8cc7f983d69cbe30e34a08c46d38ad64b6a083b9aee0f7963ce14cd" Jul 2 00:29:02.347582 containerd[1736]: time="2024-07-02T00:29:02.347461125Z" level=info msg="CreateContainer within sandbox \"8a1a576fa8ccaffeae75290a8dd8d21cf79b0964c51cbce800d1f6175ed8a677\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 00:29:02.395874 containerd[1736]: time="2024-07-02T00:29:02.395764501Z" level=info msg="CreateContainer within sandbox \"8a1a576fa8ccaffeae75290a8dd8d21cf79b0964c51cbce800d1f6175ed8a677\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ba5ec5f9b620e2edd5304411f35f6b63b31bfae846b28b2144ba1294c3dd223f\"" Jul 2 00:29:02.396726 containerd[1736]: time="2024-07-02T00:29:02.396477418Z" level=info msg="StartContainer for \"ba5ec5f9b620e2edd5304411f35f6b63b31bfae846b28b2144ba1294c3dd223f\"" Jul 2 00:29:02.423842 systemd[1]: Started cri-containerd-ba5ec5f9b620e2edd5304411f35f6b63b31bfae846b28b2144ba1294c3dd223f.scope - libcontainer container ba5ec5f9b620e2edd5304411f35f6b63b31bfae846b28b2144ba1294c3dd223f. 
Jul 2 00:29:02.461003 containerd[1736]: time="2024-07-02T00:29:02.460921345Z" level=info msg="StartContainer for \"ba5ec5f9b620e2edd5304411f35f6b63b31bfae846b28b2144ba1294c3dd223f\" returns successfully"
Jul 2 00:29:08.745182 systemd[1]: cri-containerd-7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be.scope: Deactivated successfully.
Jul 2 00:29:08.766987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be-rootfs.mount: Deactivated successfully.
Jul 2 00:29:08.784332 containerd[1736]: time="2024-07-02T00:29:08.784237801Z" level=info msg="shim disconnected" id=7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be namespace=k8s.io
Jul 2 00:29:08.784332 containerd[1736]: time="2024-07-02T00:29:08.784328361Z" level=warning msg="cleaning up after shim disconnected" id=7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be namespace=k8s.io
Jul 2 00:29:08.784332 containerd[1736]: time="2024-07-02T00:29:08.784340081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 00:29:08.805394 kubelet[3211]: I0702 00:29:08.805338 3211 status_manager.go:853] "Failed to get status for pod" podUID="79ebbea4-fb32-4128-a399-17a785b530cf" pod="tigera-operator/tigera-operator-76ff79f7fd-mx6lr" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.12:45676->10.200.20.11:2379: read: connection timed out"
Jul 2 00:29:09.362918 kubelet[3211]: I0702 00:29:09.362314 3211 scope.go:117] "RemoveContainer" containerID="d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d"
Jul 2 00:29:09.362918 kubelet[3211]: I0702 00:29:09.362631 3211 scope.go:117] "RemoveContainer" containerID="7619355b9b84633cc55cf4bf72b962b2458b08fb038b0ae88db86708b41a51be"
Jul 2 00:29:09.362918 kubelet[3211]: E0702 00:29:09.362849 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-76ff79f7fd-mx6lr_tigera-operator(79ebbea4-fb32-4128-a399-17a785b530cf)\"" pod="tigera-operator/tigera-operator-76ff79f7fd-mx6lr" podUID="79ebbea4-fb32-4128-a399-17a785b530cf"
Jul 2 00:29:09.364632 containerd[1736]: time="2024-07-02T00:29:09.364578788Z" level=info msg="RemoveContainer for \"d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d\""
Jul 2 00:29:09.372364 containerd[1736]: time="2024-07-02T00:29:09.372313805Z" level=info msg="RemoveContainer for \"d933c7a50b2e40843ac3c98e03e751f37830490cb64f132c638c3de4b4d0a59d\" returns successfully"
Jul 2 00:29:11.831024 kubelet[3211]: E0702 00:29:11.830790 3211 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-3e8d94ffa6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:29:21.142609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.158629 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.176261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.191432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.206452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.222043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.222356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.237858 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.238132 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.262037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.262329 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.270159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.270443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.285842 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.294696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.303069 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.310957 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.319329 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.327590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.327840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.351037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.351419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.351552 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.366886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.367300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.382758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.383112 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.398800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.399166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.415499 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.415877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.431999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.440625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.448956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.457588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.466278 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.466642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.483301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.483675 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.500024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.508685 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.508996 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.525265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.525812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.541169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.541489 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.556947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.557251 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.572414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.588111 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.588454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.588598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.603765 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.611788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.620142 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.628086 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.636501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.636796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.651985 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.659947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.667983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.675933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.676445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.691794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.700097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.700350 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.715783 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.716149 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.731789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.732091 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.747603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.755590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.763605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.771536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.779663 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.787865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.788110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.804814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.812869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.821216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.821496 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.833506 kubelet[3211]: E0702 00:29:21.831824 3211 controller.go:195] "Failed to update lease" err="Put \"https://10.200.20.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.1.1-a-3e8d94ffa6?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:29:21.837850 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.838184 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.854297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.854661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.870058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.870383 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.885728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.893774 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.894101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.909842 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.917838 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.925759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.933724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.941650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.950006 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.950248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.965819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.966205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.981846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.982183 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.997487 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:21.997786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.014117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.014353 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.030416 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.038952 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.039189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.056016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.066931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.076045 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.084652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.085353 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.102079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.110868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.111000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.126595 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.126886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.143581 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.143968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.160004 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.168759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.168990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.185594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.185948 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.223144 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.223509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Jul 2 00:29:22.223650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#234 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001